End resume-driven design

How many times have you seen the latest technology injected into a project, used for the duration of a feature or release, and then left to wither?  I've seen it more times than I'd like, and it's got to stop.

Don't get me wrong.   I love new technology -- a lot.  I love learning how it works, and I love to figure out where it's appropriately used.  I also know that you can't just throw new technology into a software project without a plan, and yet I see it happen over and over.


Last week, I saw someone try to shove Entity Framework into a project on the sly, as if nobody was going to notice.  Chaos ensued when this plan blew up, and repairs are now being made.  The bungle itself was alarming, but I was even more disturbed to reflect on how many checks and balances were already blown before the right people learned what was going on, and why it was a bad idea.

This is a failure on multiple levels.

First, developers themselves should know better than this.  The reason EF was chosen in this case was nominally because it was supposed to help the team deliver a feature more quickly.  As developers, we've all seen this argument fail dozens of times, and yet we fail to learn our lesson.  New technology certainly improves our craft over time, but the first time we use any new tool, we run into teething problems.  If we grab a brand-new, shiny box of tech goodness off the shelf and honestly think that it's going to work perfectly the first time we plug it in, we should be hauled out back and bludgeoned.

Next failure: architectural guidance.  In this case, there exist architectural standards that cover data access, so at first glance, it would appear that this is an open-and-shut case.  In practice, though, the standards are very poorly socialized, and they're badly out of date.  In short, they have the appearance of not being relevant, so it's easy for developers to discount them.  Architectural standards need to be living and breathing, and they need to evolve and grow so that designs can adopt new technologies at a measured pace.  To do less than this is to erect a static roadblock in front of developers.  The developers will drive around it.

Finally, management allowed this failure in a couple of ways.  Early in this process, a dysfunctional conversation occurred.  Management needed a feature by such-and-such date.  Development thought this wasn't nearly long enough.  Wringing of hands and gnashing of teeth ensued, and eventually, the developers capitulated, claiming they could make the date, but only by departing from our normal development standards and using this new tech tool instead.  Some form of this conversation has been responsible for more software disasters than I can count.


No matter how much time we put into defining our processes, no matter how many years of experience we draw upon, and no matter how many times it's proven that shortcuts kill, we keep getting suckered into them.

Personally, I draw a couple of conclusions from this.  First, we just need to have a little more personal integrity and discipline.  That's sort of a cheap shot, but it's true.  The second conclusion, though, is more of a reminder to us as an industry: if we're so desperate that we'll take long shots like this, despite the odds, then the state of the industry must be pretty bad, indeed.  As an industry, we need to acknowledge that we're causing this sort of reaction, and we need to find a way to be more productive, more reliably.

But not by throwing out the process just when we need it most.


Feet on the ground

Here's your free management tip for the day: get out of your chair and go see what's happening on the floor.

Every summer, I go to Boy Scout summer camp with my son.  Although this passes for vacation, it invariably ends up being a management clinic.  You might think you see where I'm going with this: chasing 30 Scouts around is just like chasing developers, right?


But that's not actually what I had in mind.  My real job at summer camp is to remove obstacles for the kids.  They're at camp to earn badges, and it's amazing how many trivial problems pop up and totally confound either the kids or the camp counselors, who are also kids.  Left to their own devices, both the Scouts and the counselors can be expected to churn on these problems, waiting for them to fix themselves.

I've seen counselors, for instance, find themselves without some tool or supply they need to teach a class, and they'll just wait for the tool fairy to drop by and bestow upon them a brand-new left-handed clipboard, or box of triple-gusseted zip-lock bags, or whatever they're missing.  In the meantime, Scouts sit idle because they can't get their work done, and badges are not earned.

Other problems occur as well: sometimes there are disruptive students; sometimes the counselors don't know the material they're supposed to teach; sometimes kids aren't paying attention to work they're supposed to do outside of "class" time.  In all these cases, though, if you're sitting back at camp asking kids how they're doing, they'll all tell you things are great -- right up to the end of the week when they don't get their badges.

Now, it's true that some of these problems would eventually sort themselves out, but probably not in time for most of these kids to catch back up again.  I consistently find that if I spend time checking out classes myself, I can spot problems and take them to someone who can help, and we can get small issues turned around before they become large issues.

What's the lesson here for a software manager?

First, I'm not suggesting that you hover behind someone's desk so you can swat them when they stop coding.  I'm talking about finding the real obstacles to effective development and addressing them.

You might find that your developers need better equipment.  You might find that the requirements they're working from are woefully inadequate.  You might find that servers they depend on are slow or frequently down.

But don't limit your observations to developers.  Check out the help desk.  This is one of the best sources of information about how your software is really performing.  It's one thing to read about a problem in a help desk ticket, but it's another thing altogether to observe your customers' emotional reaction to problems with your software.  Similarly, watch your software in use by real customers, or by your operations team, and you'll gain a new appreciation for real-world usability and performance.

When there's something wrong with your software, you'll probably learn about it eventually through your TPS reports, but you can keep an awful lot of small problems from becoming big problems by getting out there and looking around.
