Codemash 2.0.1.0

It's CodeMash time again, and once again, I'm taking notes on my Adesso tablet, and uploading them with Evernote to a CodeMash public folder.  You can see all of my scribblings there as I sync and once again, I'll have some posts to follow based on what I'm seeing here.  There have already been some really fantastic sessions.

Update:  If you look at these notes, you'll immediately see two of the most troublesome issues (for me, at least) with the Adesso pad.  First, this pad uses a regular 8.5x11 pad of paper that's clipped onto the Adesso pad.  This part is actually pretty great, because I don't need special paper to record notes.  In fact, I almost ran out of paper during the conference.  With other pads, I might have been done at that point, but with the Adesso, I could (in a pinch) just flip the paper over and write on the back, or grab a flyer or whatever other standard paper might be lying around.

So what's the problem?  The pad can slip.  If there's any play in the pad on the Adesso tablet, the writing becomes pretty illegible - especially on the bottom of pages.  This is pretty apparent in a few spots in these notes.

The other big problem is that it's really easy to record right over the top of a page that's already got notes on it.  I've now gotten into the habit of advancing the "electronic" page when I flip the paper page - this bit me regularly for a while after I bought the tablet.  Last week, though, I fell victim to another variation on this theme -- the "page forward" and "page back" buttons are located on the left edge of the tablet, and being a lefty, I bumped them accidentally a couple times.  Without realizing it, then, I moved the "electronic" page back to one that I'd already recorded, and just kept right on taking notes, resulting in a double-exposed page or two.

Choosing a Linux Distribution

A while back, a friend asked me some questions about getting started with Linux.  He wanted to know specifically whether he should pick up a copy of RedHat at the local retailer -- for something on the order of $80.

I sent him back some notes about choosing a Linux install (distribution, or "distro"), and thought there might be some information here worth repeating.  Incidentally, you might find it worth trying a Linux Live CD (see below) even if you don't think you're a Linux person just to see how your hardware performs with another operating system - it can be eye-opening.

[callout title=Gnome or KDE?]When you start looking at distros, you'll immediately notice that most identify themselves as either a Gnome distro or a KDE distro.  Gnome and KDE started out as window managers, which meant they were responsible for displaying, styling, and managing the user interface.  Over time, though, both have turned into extensive library installs, much like Microsoft's .Net framework.  If you install a lot of applications, you'll eventually end up with most of both frameworks installed, but only one of them (at a time) will actively run your UI.  In any event, you should try one of each and see if you give a hoot which one you use.  Read more about them here.[/callout]

Don't install yet...

Before you install anything at all, you can play with one or two versions of Linux to see what you like about each of them.  The easiest way for most people to do this is with a Live CD.  A Live CD (or DVD) is a bootable image of an operating system.  Just download the Live CD image (this will most likely be an ISO file), then burn the ISO file to a disc.  Leave the disc in your optical drive, reboot your computer, and make sure your computer is set to boot off the optical drive before it boots off your hard drive.

If you followed along to this point, you should see Linux booting off the disc you just created.  It'll start up, detect most of your attached devices, acquire a network address via DHCP, and you should end up with a working Linux desktop in just a few minutes without making any permanent changes at all to your machine.  When you're done, just eject the CD and reboot back into Windows.

... or don't install at all

The other way to evaluate Linux distros is with a Virtual Machine.  Both Virtual PC from Microsoft and VMWare (Player or Server) are free, and both can be incredibly valuable for testing, development, and so on.  Between the two, I'd pick VMWare -- it's fast, stable, and widely supported.  Plus, if you choose VMWare, you can download pre-built images from their Appliances site.  This can be even faster than using a Live CD once you've got VMWare installed.

The family tree

[Image: OpenSUSE 11.1 with KDE 4, via Wikipedia]

There are probably thousands of Linux distributions, or distros.  For some sense of scale, see this list or http://distrowatch.com/.  Despite these numbers, there are relatively few popular distros, and almost all of them have been formed by "forking" an existing distribution.  Thus, any distro you pick will come from a small number of "root" distros.  Take a quick look at this diagram to see what I'm talking about.

Although the specific lineage you pick shouldn't matter much, you'll usually see support articles refer to instructions for one or more of these roots, so it's helpful to know if you're running a Suse-based distro, or a Gentoo-based distro, and so on.

For what it's worth, the most popular general-purpose distro right now is probably Ubuntu, so if you're absolutely flummoxed right now about where to start, that would be a fine one to begin with.

But what about RedHat?

If you've made it as far as firing up one or two distros, you'll realize that you're not going to get much in the RedHat box that you don't get for free somewhere else - after all, that's the Linux way.  If you still see something on their feature list that you want to pay for, go ahead, though - at least you're an informed buyer now.  Of course, if you insist on buying a distro, Suse's enterprise distro is probably worth a look, too.


Fixed bid isn’t nirvana

When a business wants a custom software deliverable, there's a basic decision to be made about how this software is sourced, or procured.  Like a buy-vs.-lease decision when you go to the car dealership, the business has to decide first if they can and/or will develop the software with internal resources, and if not, they need to solicit bids and pick a vendor to do the work.

[Image: We Talk about Software Development and Busines... by Ikhlasul Amal, via Flickr]

For many years, the only way to obtain software development expertise from outside your company was to contract that expertise on an hourly basis.  In fact, this is still the bedrock upon which most "consulting" firms are based.  It's not sexy, but it pays the bills.  For customers, however, this can be an unrewarding experience because they still must provide most of the infrastructure of a software development shop, and in many cases, this just isn't feasible.

In order for consulting companies to meet this need, these firms began to sell "solutions."  In a broad sense, this is simply an arrangement to bundle all the work for a deliverable and charge per software deliverable, rather than per hour.

Solutions, then, are sold either on a fixed-bid basis or on a time-and-materials (T&M) basis.  I'll forgo a full examination of either model here, because most people are familiar with them.  Suffice it to say that "fixed-bid" remains by far the sexier option, because it promises predictable budget-to-expense performance for the business, and predictable forecast-to-revenue performance for the solution provider - both of which make management look great.

In actual practice, however, fixed-bid projects don't always turn out to be quite that tidy.  Requirements change, bids are supplemented by change controls, staff turns over, and any number of other minor catastrophes combine to make one or more parties in these transactions wish they'd done things very differently.

Over the last five years, then, any number of IT services companies have swung from 100% staff-aug work to take on a fair number of fixed-bid projects, and many have now started to swing back the other direction because of the pain they've experienced.  Fixed-bid is still a sexy sell, but it's now recognized as a tough delivery.

[Image: Software Development, by Fabio Bruna, via Flickr]

Customers have also had a chance to experience this pain.  When you fix the price of an IT deliverable, you're going to experience pain with each and every change you need to make.  In many cases, this will even include changes that you don't perceive to be changes at all, merely because requirements weren't spelled out clearly enough in the beginning.  Customers who aren't really good at analysis and requirements definition will very likely feel like they've been run through the wringer.

The bottom line for fixed-bid projects:  Customers who understand what they want, and can specify this in writing, will do fine in a fixed-bid scenario.  These customers are more likely to get the software they want, and their vendors are more likely to deliver this software with a minimum of drama.  If a customer doesn't understand what they want, fixed-bid probably isn't the right answer.  Instead, concentrate on prototyping or agile development to derive the right answer, but understand that the delivery schedule has to be subject to change as requirements become better understood.


Is your Enterprise too big to fail?

We have met the enemy, and it is Complexity.

About a year ago, the media was flooded with debate about financial institutions that were deemed "too big to fail" by industry analysts, financial experts, and government representatives.  A failure by one of these institutions, these experts told us, would have catastrophic effects on our economy, and it was therefore incumbent upon us as a nation to keep these firms solvent.

And if you believe this, I've got some great beachfront property in Kansas I'd like to talk to you about.

[callout title=What dependencies?]How, exactly, are all these financial firms connected?  Obviously, a full exploration of this is out of scope here, but the single biggest problem child in the financial meltdown was the web of financial derivatives, which financial institutions applied as a sort of insurance policy, but which, in the end, turned out to be anything but that.  The idea behind these derivatives was to aggregate a bunch of debt obligations together and resell them, and financial firms believed this would act, among other things, as a sort of reinsurance guarding against risk in any of the individual debt obligations.  As these things were packaged and repackaged, though, the cumulative effect was to create a virtual Ponzi scheme of obligations linking all of these firms together in a frighteningly incestuous fashion.[/callout]

"Too big to fail," you see, resulted directly from "too complicated to understand."  The real nightmare scenario for these experts wasn't the failure of a single institution, no matter how big it was -- it was the domino effect where the failure of one institution would cause a chain reaction of failures across all the top financial firms.  Here's the killer, though: these experts couldn't tell you which dominoes, exactly, would start a chain reaction, which means that (1) these experts understood that there were horribly complicated inter-dependencies among these financial firms, and (2) they didn't understand these dependencies well enough to understand what would happen when one failed.
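To make that domino reasoning concrete, here's a toy sketch of contagion through a web of obligations.  The firms, exposures, and "any counterparty failure sinks you" rule are entirely invented for illustration -- real inter-firm dependencies are exactly what nobody could enumerate:

```python
from collections import deque

# Invented example: each firm lists counterparties whose failure would sink it.
exposures = {
    "FirmA": ["FirmB"],           # FirmA fails if FirmB fails
    "FirmB": ["FirmC", "FirmD"],  # FirmB fails if either C or D fails
    "FirmC": [],
    "FirmD": ["FirmA"],           # note the cycle: A -> B -> D -> A
}

def cascade(initial_failure):
    """Return the set of firms that fail, directly or by contagion."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        down = queue.popleft()
        # Any firm exposed to a failed counterparty fails in turn.
        for firm, counterparties in exposures.items():
            if firm not in failed and down in counterparties:
                failed.add(firm)
                queue.append(firm)
    return failed

print(sorted(cascade("FirmC")))
print(sorted(cascade("FirmD")))
```

Even in this four-firm toy, one failure takes down everything, and you can't see that by inspecting any single firm -- which is the whole point.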

The problems with this scenario go on and on.  Very few people have any real grasp on the actual implementation of complex financial instruments within a single organization, let alone the contractual obligations that link firms to firms.  This includes the executives running these companies, who, in true empty-suit fashion, blindly accepted whatever horse-hockey they were fed by their underlings because there wasn't a chance in hell that they understood all the details of the operations under their control.  This isn't to say that these executives aren't bright guys -- nothing could be further from the truth.  In fact, the complexity of these operations is so staggering that no one person could ever hope to grasp the whole picture at once.

At this point, you might be wondering what this has to do with software development.  Fair enough.  The connection is the common problem: complexity.

Software developers are no strangers to complexity.  Despite an onslaught of tools, techniques, frameworks, and design patterns which all claim to make Enterprise software development "easy", our applications invariably end up complicated as hell by the time they're installed.  I opened a Visual Studio solution the other day that had 43 projects in it.  What's worse, this solution clearly followed patterns that have been demonstrated by Microsoft, so there's a clear argument to be made here that this behemoth is crafted according to industry best practices.

[Image: Dominoes line, via Wikipedia]

A developer who's immersed in such a solution has a ghost of a chance of understanding what's happening among those 43 projects, but just barely, and only if he's really good and really focused on this solution.  This is functionally equivalent to a Financial Analyst who understands in great detail how an individual financial derivative is constructed.  Like the financial example, though, there's no way that business leaders could ever hope to fathom the complexity of software like this, and they're the ones who are making decisions about it.

This has to end.  The status quo is one where our leaders don't understand the things they're making decisions about.  Is it any wonder that they all behave like pointy-haired bosses?

It turns out that I'm not the only one that believes this.  Roger Sessions is a well-known architect and author who's been promoting simplicity since the dawn of the Doppio Macchiato.  Recently, he's been supporting his book, Simple Architectures for Complex Enterprises, with some excellent blogging and a white paper, which can be found on the book's companion site.

Honestly, some of Roger's material isn't easy to swallow.  Software designers by nature are a pretty self-assured bunch, and usually, that's well-justified, as they're also a pretty bright crowd.  We'd all like to believe that the designs we've crafted are perfectly extensible and scalable.  Furthermore, we're reluctant to admit that anything is too complicated for us to grok.

It's not easy for business leaders to grasp this stuff, either.  Since they're already oblivious to the complexity of these systems, it's a pretty big leap of faith for them to accept that complexity is the problem, and furthermore, that there can actually be a solution.

Adding to this whole mess is the predisposition of nearly all the individual role players to keep right on doing what they've always done.  Our understanding of capitalism and efficient markets suggests that groups of people will collectively find efficiency and balance, but when the individuals involved don't really understand how their actions affect outcomes, you start to see some pretty odd collective behavior.  Dan Ariely has done some remarkable work exploring this paradox in his book, Predictably Irrational.

All this suggests that it's going to take a pretty phenomenal level of commitment to change the status quo.  Let's start by admitting we've got a problem.  I believe that Complexity is the #1 problem facing Enterprise software development today, and it needs to be confronted and mastered.

Are you with me?


Agile Leadership: Methodology Ain’t Enough

It's a running joke in software development that as soon as someone demonstrates that he's a good software developer, he's promoted to management, whether he wants it or not.

[Image: Project Management, by Cappellmeister, via Flickr]

In recent years, of course, many companies have addressed this to some extent, and it's now just as common to see software leaders come from a formal Project Management background, often without having had prior experience in hands-on development.  I'd argue that this still isn't ideal, unfortunately, since this simply gives you a business-oriented leader who doesn't understand the technical domain of his or her work.

A recent entry on The Hacker Chick Blog explores this issue, as well (Agile Leadership: Methodology Ain't Enough).  Read through this article, and consider a leader with a technical career path vs. a non-technical career path.  Is it reasonable to expect someone without a proper technical background to be able to facilitate the sort of interactions Abby describes in her post?

I believe that a really effective direct manager (not a manager-of-managers) really does have to combine technical skills with business skills.  One without the other, in my experience, leads to frustration and conflict.


Smashing Magazine on Mastering CSS Coding

Cascading Style Sheets (CSS) are critical to web layout these days.  If you do any web work at all, you owe it to yourself to have at least a passing knowledge of CSS, while most of us need a solid grasp of the subject.

Just because CSS is nearly ubiquitous, however, doesn't mean that it's easy.  In fact, CSS can be just about enough to drive you batty when it's not working the way you expect.  In those moments, it can be helpful to go back to basics and make sure you're not missing something important.

Whether you're learning or refreshing, Smashing Magazine has an article that just might help you out: Mastering CSS Coding: Getting Started.  This certainly isn't a comprehensive reference, but it's a great high-level overview.

Callouts: I need a WordPress plugin

In a recent post, I wanted to break out a paragraph into a callout or sidebar -- a little box of goodness inset into the body of my post. This proved rather more difficult than I'd planned.

Initially, I crafted the callout by hand, using a DIV for the box, and two DIVs within that for the title and body. I putzed with the CSS for a while, and when I finally got it looking the way I wanted, I pulled the styles out and coded them in my blog's stylesheet.

[Image: Google Reader fail!]

So far, so good. This was shaping up to be a minor pain, but nothing I couldn't handle. The first big hiccup happened when I realized that the styles for my DIVs weren't showing up in Google Reader for my RSS feed (you may have noticed this).

Diagnosing this problem turned out to be a much bigger rathole than I expected. Initially, I thought that perhaps my feed just wasn't referencing my stylesheet, so I started playing with this. Before long, I ended up writing a simple WordPress plugin to add a stylesheet reference to my feed. No matter which event I hooked, though, I couldn't get the feed to do what I wanted it to do. After way too many attempts to fix this, I finally stumbled on some blogs that pointed out the root problem here: feed readers generally don't use stylesheets.
[callout title=Feed readers - FAIL]Needless to say, when I found out that feed readers don't actually use stylesheets, I was floored. Not only that, but this appears to be a problem almost across the board. There's a clear opportunity here for someone to step up and lead the way on this issue - someone, please carry this ball forward![/callout]

So, the real solution, it appears, is to have the styles embedded in the DIV definitions.  This is ugly for all sorts of reasons that are probably obvious to you, but it's the only way I can see to solve this problem if I want styles in my feed (which I do).

Given that constraint, I elected to make another plugin -- this time, to let me use quicktag-style encoding to type my callout into my post and translate this into appropriately-styled DIVs for display.  As evidenced by the callout in this post, this is progressing nicely.  I've checked this plugin into the WordPress plugin repository, though I'm sure there's still some fine-tuning to come.
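The translation the plugin performs can be sketched in a few lines.  This is illustrative Python, not the plugin's actual PHP, and the styles shown are placeholders, not the ones on this blog:

```python
import re

# Placeholder inline styles -- feed readers ignore stylesheets, so the
# styling has to travel with the markup itself.
BOX_STYLE = "border:1px solid #999; background:#f5f5f5; padding:8px; float:right; width:40%;"
TITLE_STYLE = "font-weight:bold; margin-bottom:4px;"

def expand_callouts(post_body):
    """Replace [callout title=...]...[/callout] with inline-styled DIVs."""
    pattern = re.compile(r"\[callout title=(.*?)\](.*?)\[/callout\]", re.DOTALL)

    def replacement(match):
        title, body = match.group(1), match.group(2)
        return (f'<div style="{BOX_STYLE}">'
                f'<div style="{TITLE_STYLE}">{title}</div>'
                f'<div>{body}</div></div>')

    return pattern.sub(replacement, post_body)

print(expand_callouts("[callout title=Hi]Feed-safe styles.[/callout]"))
```

Because the styles end up as attributes on the DIVs themselves, the callout renders the same in a feed reader as it does on the site.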

When have you extended a system that didn't quite do what you wanted it to?


Tilting the web — really?

Here's a gem for you, courtesy of Engadget:

Firefox 3.6 will support accelerometers, make the internet seasick (video).

Right.  I can't wait to write CSS for this.  Are you kidding me??

If you look at the picture of this browser, you can see that the frame of the browser window stays locked in place on the screen, and sure enough - as the screen is tilted, the rendering within the browser is tilted to compensate.

[Image: Mozilla Firefox, via Wikipedia]

Can this really be considered a feature people want or need on their devices?  Here's the bottom line, Firefox:  You're running on a rectangular device, no matter how it's tilted.  There are only two reasonable layouts on a rectangular device:  portrait and landscape.  For everything in the middle, I'm going to go out on a limb here and stipulate that if someone is holding a device at an angle, it's probably because the viewer is at an angle, and thus, wants to turn the device to match their aspect.

Let's also not forget that a screen rendered at non-square angles is going to introduce all sorts of fun graphical artifacts as you re-render at a constantly-changing angle.  Remember that our fonts and graphics are optimized for a rectangular pixel layout, and anything other than square is going to require interpolated rendering of everything on the screen (which, being a mobile device, by the way, is probably of lower resolution than our desktops).  Take a look at this image:

[Image: Engadget tilted on screen]

Ignore, if you're able, the weird optical illusion that makes the rest of your screen now look like it's tilted to the right.  This image, in fact, shows the mobile version of Engadget's home page tilted about fifteen degrees to the left.  Note that while this is certainly still readable, the fonts show significant pixelation -- they're blurry and jagged.

You can also see that the screen is cut off on four corners.  This is because the size of the canvas that will fit perfectly inscribed within the fixed physical screen changes as the angle of tilt changes.  I'm sure there's a geometric formula that can be used to figure this out, and it would be more or less complicated depending on whether you hold the aspect ratio of the canvas constant (which could work right up to the point where you switch from portrait to landscape).  In any event, the size is going to change, and that means that you either change the size of the HTML canvas, reflowing text, images, etc., or you do still more interpolation and rendering to account for the smaller canvas size you're painting.  Either way, I think the user loses.
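For what it's worth, that geometry isn't too bad if you hold the canvas's aspect ratio constant.  Here's one way to compute it -- a sketch, not anything Firefox actually does, with an arbitrary example screen size and aspect ratio -- using the fact that a w-by-h rectangle rotated by an angle t (between 0 and 90 degrees) has a bounding box of (w·cos t + h·sin t) by (w·sin t + h·cos t):

```python
import math

def inscribed_canvas(screen_w, screen_h, aspect, tilt_deg):
    """Largest canvas of the given aspect ratio (width/height) that,
    rotated by tilt_deg, still fits inside the fixed screen."""
    t = math.radians(tilt_deg)
    c, s = math.cos(t), math.sin(t)
    # With h = w / aspect, solve each bounding-box constraint for w
    # and take the tighter of the two.
    w = min(screen_w / (c + s / aspect),
            screen_h / (s + c / aspect))
    return w, w / aspect

# Hypothetical 320x480 screen, 4:3 canvas: compare no tilt vs. a 15-degree tilt.
print(inscribed_canvas(320, 480, 4 / 3, 0))
print(inscribed_canvas(320, 480, 4 / 3, 15))
```

Run it and you'll see the canvas shrink as soon as the tilt angle moves off zero -- which means reflowing or re-interpolating the whole page, continuously, as the angle changes.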

Can you just picture the bugs that are going to come in once people start using this?

  • Your app blinks when I tilt the screen.
  • When I tilt my screen, I can see everything just fine, but when I tilt it back, the left side of the text is cut off.
  • When I tilt on Firefox, your app works fine, but when I tilt on Chrome, it doesn't work at all.
  • I got a migraine because of your app.
  • What's with the screen swimming around??

Can anyone, anywhere really come up with a scenario where I, as a user of a mobile device, am going to hold my device at an angle and hope that my software can figure out how to render a pixelated, ugly version of the app I'm using at an angle that now, no longer matches the angle of my screen, and (dare I add) keyboard?

On behalf of all web developers everywhere on Earth, Firefox, I implore you:

Stop the madness!!


The construction analogy

I did it again - just like so many others before me.

Last night, Rob Conery dropped a rant about an industry heavyweight who, Rob felt, was doing us all a giant disservice by suggesting that perhaps less could be more, process-wise.  So, of course, I chimed in, and I used the tired, old construction analogy -- you know -- building software is like constructing a house, right?

[Image: Sunset over Pearl Qatar, by fatboyke (Luc), via Flickr]

In my defense, the specific point I was trying to make was that the software industry isn't mature in the same way as construction.  Where you have building codes in Civil Engineering, for instance, there's no equivalent binding set of rules governing how software is built -- just a nebulous collection of "best practices".

Then, this morning, I read another post on the IxDA discussion board trying to explain how software design (specifically, functional specifications) is like the function an architect performs when someone builds a home.

Clearly, there are holes in that analogy, and I pointed out a couple in a comment on that site.  The complexity (in terms of options) in construction pales next to software development in all but trivial applications.  As I mentioned in my comment, a buyer could sit with an architect and describe parts of a home in pretty general terms, and be quite satisfied with the resulting home, because there already exists a rich vocabulary of assumptions about how homes are supposed to work.

Software interface design is automatically more complicated than physical buildings in part because we’re emulating “real” things in software.  All the metaphors and affordances we design into our software provide unlimited complexity, such that any conversation about what a customer wants to see in software has to occur at a high level of detail.

Beyond specifying behavior, the process of construction proceeds quite differently in software, as well.  When building a home, you quickly come to understand that some aspects of construction are literally “set in stone” very early in the project, and more importantly, these things are intuitively understood by the customer.  If a homeowner wishes to change a foundation or the frame of a home, he’s not surprised to learn that this will be an expensive change; anyone can see how one is built upon another.

Building codes also help constrain choices (and therefore, costs) when building a home.  There’s simply no arguing about how studs are spaced, or the materials that are used for different parts of the home, because these are mandated by building codes.  In software, everything is negotiable, which implies that it must be specified.  Once again, the analogy breaks down.

As many times as I’ve seen the construction analogy, it’s clear that a lot of people want it to work.  Just remember that it only works to a point.


Cloud pricing: is Azure competitive?

Now that Azure's pricing model is starting to take shape, we're starting to see what Microsoft meant when they said Azure would be priced "competitively".  This week, Oakleaf Systems compared Azure to Amazon's cloud offerings, highlighting costs for developers to get started on these platforms (Amazon Undercuts Windows Azure Table and SQL Azure Costs with Free SimpleDB Quotas).

[Image: Clouds on Steroids, by ...-Wink's in Mexicali-..., via Flickr]

I believe Microsoft may be able to get away with pricing at a slight premium to Amazon and GAE, in large part because SQL Azure, with its compatibility with existing SQL Server code, is an absolute killer feature.  Nevertheless, Azure pricing has to be aggressive, or the value proposition for Azure starts to look weak.  I think it's especially important to make the platform available to developers at a free or near-free price point, or Microsoft just won't be able to build the mind-share and experienced developer base it needs to grow.  I'll continue to keep an eye on this space.
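The kind of comparison Oakleaf ran boils down to simple arithmetic, and free quotas matter a lot at developer scale.  Here's a back-of-the-envelope sketch -- every rate below is a made-up placeholder, not an actual Azure or Amazon price:

```python
# Hypothetical platforms with placeholder rates (NOT real prices).
PLATFORMS = {
    "CloudA": {"compute_per_hr": 0.12, "storage_per_gb": 0.15, "free_gb": 0},
    "CloudB": {"compute_per_hr": 0.10, "storage_per_gb": 0.15, "free_gb": 1},
}

def monthly_cost(plan, hours=720, storage_gb=10):
    """Estimated monthly bill: compute hours plus billable storage.

    Free storage quotas are subtracted before billing -- which is how a
    free tier can tip the comparison for a developer's small workload."""
    billable_gb = max(0, storage_gb - plan["free_gb"])
    return hours * plan["compute_per_hr"] + billable_gb * plan["storage_per_gb"]

for name, plan in PLATFORMS.items():
    print(name, round(monthly_cost(plan), 2))
```

The point isn't the numbers -- it's that at small scale, a free quota on one line item can swamp a per-hour difference on another, which is exactly why entry-level pricing shapes developer mind-share.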
