Simplicity

Simplicity is good. Simplicity is the source of understanding and effectiveness. Lack of simplicity drives misunderstanding and errors.

This shouldn’t be a big surprise to too many people, but I see lots of people forget about simplicity all the time. That’s really too bad, since there’s such a strong correlation between simplicity and success. Let’s look at some areas where simplicity pays off.

Simplicity in Communication

Speak clearly. Write clearly. Watch for fluffy words that sneak into your writing and obscure your ideas. This is very common advice, but too rarely followed. We’ve all seen the effects of overly complex communication: eyes glaze over, drool dribbles out of the corners of mouths, and ideas fall on deaf ears.

Everyone knows someone who speaks or writes much too verbosely. Entire paragraphs are used where a few words would do nicely. When you find yourself unwinding sentences to figure out where you missed a left turn, you’re a victim of unneeded complexity. Notice how much harder it is to keep up with such a conversation and actually get anything out of it. It’s a ton more work as a listener, and when you do that to your listeners, you’re decreasing your effectiveness.

Simplicity in writing doesn’t mean lack of color or imagination. Color drives interest, and interest drives understanding. When you read something from an excellent writer, you’ll see both color and simplicity. Each word seems chosen to achieve a specific intent. It’s as if the writer has a limited budget and is careful not to exhaust his supply, but in reality, the limited budget is the reader’s patience and attention.

Simplicity in Design

There’s a feeling I get when designing software. Designs often start out or pass through a complex period where things seem like they’ll probably work, but there’s a lot of “noise” in the design. It feels like the design is tough to communicate because it’s incredibly intricate or complicated.

On the other hand, sometimes a design is just dead-on. It’s right – there’s no other way of doing something that could possibly be more right. It’s boiled down to a perfect formula, and everything about it gets easier. It’s easy to understand, easy to talk about, easy to advance through development, easy to test, and easy to document.

I believe that almost all complex designs could somehow be refined to be simple. Often, this is hard work that takes a long time and requires intense analysis. Sometimes it’s not worth it. Marginal complexity doesn’t hurt too badly, so strive to simplify when you need to, and leave well enough alone when you don’t.

Simplicity in Coding

It’s a rare case when less code is worse than more code. Most developers have seen mind-bending recursive code that breaks this principle, but it’s uncommon. Far too often, we see page after page of spaghetti code that’s difficult to debug, maintain, and extend.

If you or any of your developers aren’t familiar with Steve McConnell’s Code Complete, I highly recommend you pick it up and give it a read. A recurring theme in this book is the need for deliberate intent in design and code. Each module, each subroutine, needs to have a purpose and a cohesion. This approach yields simple, efficient, maintainable code.
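As a tiny illustration of my own (the example and the function names are mine, not McConnell’s), compare a needlessly tangled routine to its simple equivalent. Both compute the same rule; only one of them is pleasant to read, test, and maintain:

```python
# Convoluted: nested, redundant conditions obscure a one-line rule.
def discount_applies_convoluted(total):
    result = False
    if total > 100:
        if not (total <= 100):
            result = True
        else:
            result = False
    return result

# Simple: the same rule, boiled down to its intent.
def discount_applies(total):
    return total > 100

# The two implementations agree on every input.
assert discount_applies_convoluted(150) == discount_applies(150)
assert discount_applies_convoluted(50) == discount_applies(50)
```

The simple version is also the one with nowhere for a bug to hide.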

Refactoring is the process of deriving simplicity out of a pile of spaghetti. If you truly end up with simple code as a result, this is a good thing, but watch out for refactoring being used as a substitute for correct understanding and design up-front. It’s all too easy to dive straight into coding without really understanding what you’re building, and end up having to refactor a couple of times because there’s no design to support your work. Simplicity by design beats simplicity by refactoring ten times out of ten.

Simplicity in Marketing

Quick – what does your company do? If you can’t rattle off an elevator pitch without stopping to think about it first, you are suffering from lack of simplicity in Marketing. If you have a hard time communicating what you do, your customers will have a hard time understanding what you do, and that tends to be bad for business.

The concept of “sound bites” in marketing has a bad reputation, but the truth is that when you don’t have this sort of simplicity of message, you don’t have control over your product and your company. Be sure you can tell the difference between a short message and a meaningful message, though. Short is only good if it means something. Watch out for words like “quality”, “usability”, and “scalability” that are completely hollow unless they’re made specific.

Seek simplicity of understanding above all, and then express this with the fewest words possible.

Simplicity in Vision

Vision is a special form of Marketing. Vision is about imparting understanding of something that you usually can’t lay your hands on. Vision is about seeing a future that you can never quite attain, but will always strive for.

People talk about “shared vision” as if there were any other kind. In fact, a vision that can’t be effectively communicated to others is pretty close to a wasted effort. I’ve seen people ranting about how they’ve got a vision for something and it’s up to the audience to figure out what that vision is. Words fail me in expressing my disdain for this position.

Vision, much like Marketing, is about leading a group of people to share a common way of thinking about something. This is tricky under the best circumstances, and damned near impossible if you can’t communicate your vision with clarity. The more complex the idea, the more likely someone will be to get it wrong.

Again, seek meaning, not just low word count. “World peace” doesn’t convey as much meaning as “no nuclear weapons anywhere on the face of the earth.” While Marketing is about driving understanding of something that exists, vision is about driving understanding of something intangible, so it’s more difficult to find meaning. Nevertheless, that’s what you have to do in order for vision to be worthwhile.

What to do when you lack simplicity

Many things aren’t, in fact, simple. Fear not, faithful student – half the battle here is just detecting lack of simplicity.

Knowing that something is complex – maybe even a little too complex – enables you to deal with it deliberately. You can plan for the complexity. You can decide whether to expend effort to make the thing simple or whether to just let it be.

When you know there’s an area of complexity in something, allow for more effort whenever you work with it. Communicating the design, performing maintenance, and so on will take longer and be less reliable. If you have a complex message, you’ll probably have to repeat it a few times before it sinks in. If you don’t like this, simplify. If you can’t simplify, deal with it. In any event, don’t be surprised by the results, because I warned you.

Once you start paying attention to simplicity and complexity, you’ll find that you can predict and/or prevent a lot of problems. It’s definitely worth consideration if there’s a problem you can’t explain otherwise.

Usability in everyday things

The power went out last night. Several times. In a fortunate stroke of timing, the outages began at 11:00pm, so I should have been able to just go to sleep and slumber the problem away in ignorance.

Instead, I had a couple of opportunities to observe some usability that wasn't too usable. Keep reading to see how we can learn some design lessons just by looking around.

The first design lesson was courtesy of our Electric Company. As soon as the power went out, my wife felt compelled to call them up and let them know that we were without power. I didn't have the heart to tell her that when the entire west side of town drops off the grid, it's gonna make a noise somewhere. She felt some satisfaction in calling to report the outage because she was Doing Something about the problem.

Lesson One:  Users want to feel in control, even if they're not. They're driving - not the computer. If you're one of those people (like me) who can't stand riding in a car -- you've got to be driving -- then you understand this idea. Make sure your users feel that they're controlling the experience.

Ok, so my wife's on the phone with the Electric Company, and I can hear them gathering information. Name, address, address again (spelled), address again (spelled as it appears on our bill from them), name again... My wife glowered accusingly at me when I started laughing. It occurred to me that they might be trying to guard against someone fraudulently reporting a power outage, but I think the real issue was that they had a rigid process that wasn't able to absorb unexpected conditions. It would have been a whole lot more efficient to break protocol and get to the point: "Yeah, west side? Right - we've got it. Thanks for your help." Click. Meanwhile, the management center is lit up like a Christmas tree showing faults all over town and they're validating billing information.

Lesson Two:  Expect the unexpected. Sorry, that's a cliche, but it's true. S**t is gonna happen. Design enough flexibility into the system so that the people who are running it can apply their infinite capacity for adaptability and deal with the problem. Do Not become part of the problem.

The lights are out now, and the thunder rumbles soothingly into the distance. BEEEEEEEEP. There's a candle on the nightstand beside our bed, and the cats are settling into their normal spots. BEEEEEEEEP. I lean over and kiss my wife, and think that this might not be all that big an inconvenience. BEEEEEEEEP.

WHAT THE HELL IS STILL BEEPING ??!!!?? ##&;@%$!

Culprit #1 - the UPS on my wife's computer. Easily dismissed by shutting off the load. Culprit #2 - our alarm system. This one stymies me for a bit, but the Cancel button seems to do the trick.

An hour later, the power comes back on. Lights come on, fans whir to life. I wake up and start turning off the lights, and then the power goes off again.

BEEEEEEEEP.

Stomp, stomp, stomp. . %^%#@*&!##@. Stomp, stomp, stomp.

This happens three more times during the night. Each time, the cursed alarm system screeches at me, proclaiming that it alone has a reserve of power -- enough, in fact, for it to squander its power by piercing the midnight silence. Thank God the alarm system told me I had no power, because that little fact might very well have escaped me otherwise.

Lesson Three:  Forcing users to acknowledge your "help" is a perilous move - use it sparingly. Consider "Clippy" the paper clip. A good portion of people's animosity toward the clip is not the fact that it's smugly offering advice, but that it doesn't know when to leave you alone. When you offer feedback to your users, endeavor to do so without requiring the user to stop what they're doing, deal with your interruption, and then try to figure out what they were doing before you butted in.

It's morning again. Lights are on, and everything's back to normal. And I'm tired.

What’s Good for the Goose …

Last week, Forbes.com ran a great article interviewing Marc Fleury about IBM's recent Gluecode purchase. Marc is a bit miffed that IBM is trying to out-JBoss him. He thinks IBM is trying to put him out of business. In fact, the thought of IBM putting an open source company in harm's way is pretty amusing, but we haven't seen the last of this trend.

Read on for my $0.02...

Ok, here's the backstory. There are enough players in this conflagration to make a Tom Clancy novel, but we can focus on a couple at a time and get a pretty good feel for the situation. Let's start with Fleury. Here's a guy who started JBoss to compete with the big boys by giving their software away and charging for services. Fair enough. He's had some success, including a >$10m Intel-backed VC round. Along the way, he's vowed revenge on a group of developers who left to form their own company, and gotten himself and his company into a fair bit of a mess by faking identities on some online forums. I get the impression that this is a guy who pounds a couple too many Red Bulls every day.

But let's take another look. JBoss showed up as a software release in 1999, and the JBoss organization became real in 2001. They grew via a professional services model, and incorporated in 2004. Now you have a company of some 150+ employees that's bringing in real revenue and hiring real employees. They no more resemble the code-for-free open-source stereotype than IBM does. In fact, they have promoted the term "Professional Open Source" to reflect their for-profit stance.

Now, IBM wants in, so they pick up Gluecode. They go on to announce that they're going to undercut JBoss on services, and JBoss is a little nervous.

So, prediction time. Does IBM drive JBoss out of business? No. But I think they may drive them into the arms of a white knight. There are still a couple of gorillas in the mist.

Oracle is one possibility. They haven't made any big moves since picking up PeopleSoft, and they wouldn't mind extending their position in J2EE by getting on the hip open-source bandwagon.

A move that would make more sense would be for HP to acquire JBoss. This would not only boost their professional services image, but would open the door for some very interesting integrations. Consider JBoss extended to run .Net software via Mono, and managed by HP management software (see my earlier prediction about .Net and J2EE integration in an app server). You'd have a real heavyweight company getting behind J2EE and .Net on the same application server via open source, with management that's ready for real enterprises. All running on HP equipment and serviced by HP experts. This would squarely hit the need that IBM is targeting: customers want to get everything from a single vendor, but know that they could switch if they needed to without disrupting their entire infrastructure.

For a bonus, consider HP picking up EMC and getting VMware along with it. Virtualization is getting hot, and HP could manage it along with everything else.

In the meantime, it's priceless to watch Fleury squirm.

Related Reading

  • JBoss secures $10m financing
  • Marc Fleury accused of Astroturfing
  • Open Source Smack-Down
  • JBoss moves up to business processes
  • Update: IBM's Gluecode Deal Adds A New Wrinkle To Its Open-Source Strategy
  • Forum discussion: Joel on Software on JBoss / IBM

Working in a Small Company or a Big Company

Most of us have had to choose whether we want to work in a big company or a small company. There are clearly benefits of each, as evidenced by the number of small companies trying to get big and the number of big companies trying to act small. But who's right? Who's best? Given the choice, do you want to work for a small company or a big company?

A couple of well-known bloggers I follow chimed in on this topic this week. Joel Spolsky started the volley with a post about recruiting, claiming that the environment and resources at his company greatly exceeded those available to your typical Microsoft employee.

In fairness, it's impossible to portray Fog Creek as representative of all small companies everywhere. In fact, one of the great things about small companies is that they tend to be a whole lot more unique from case to case than large companies. Joel does represent the successful small entrepreneur who's able to do some nice things to take care of his employees.

In my experience, these small companies can be extremely exciting places to be when they're working well. In general, they've started with a culture of having to succeed in order to survive, so they understand excellence and they understand execution. A young employee can learn a lot in a place like this.

Same thing goes for hiring. It's important everywhere, but it's vitally important in a small company. One bad apple can really hurt you because every part of the organization plays a critical role in the success of the company. In a bigger company, there are just more resources to deal with marginal cases -- better training facilities to make mediocre employees into good ones, counseling resources to handle problems, and so on. These things just don't exist formally in a small company, so individuals have to either sink or swim.

Availability of resources can be a mixed bag. As both Joel and Robert pointed out, there can be great resources in either environment. I've worked in some small companies, though, that were just a step or two above the stereotypical "garage and a bare light bulb." In general, if you're in a small company and you need something for your work, you get it as long as you can afford it. In a big company, you may find yourself wading through paperwork and months-long waiting lists in order to get something pretty trivial. On the other hand, when the things you need cost real money, there's nothing like the deep pockets of a big company to reach down and pony up the kind of real money it takes to buy serious hardware.

What's my preference?

I really like the energy and environment of excellence found in a small company on the rise. I also really like the resources available in a larger company. Once, I experienced a place where the small-company environment existed (to a degree) in a mid- to large-sized company. Only now do I recognize what a rare thing that was. The company's gone now, but if I ever find something like that again, I'll know enough not to take it for granted.

Generate code from XML

I've been looking at XML serialization / deserialization in VS.Net, and I'm really beginning to like working with the XSD.exe tool that ships with it. I've found that this is a great way to work from example data and end up with code that can serialize and deserialize directly from that data structure.

The XSD.exe tool is found in the Visual Studio install directory in the SDK folder, and it's a command-line tool. It lets you take an XML document and generate an XSD (XML schema) document from it. You can then run XSD.exe again and generate code that works with XML that conforms to that XSD. Of course, you can perform just one or the other of these steps if you want (skip generating the XSD if you already have one, for instance).
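To give a feel for the workflow, here's a sketch of the two steps from a command prompt (the file names are examples of mine, not anything the tool requires):

```shell
REM Step 1: infer an XSD schema from a sample XML document
REM (produces orders.xsd)
xsd.exe orders.xml

REM Step 2: generate serializable classes from that schema
REM (produces orders.cs; /language:VB generates VB.Net instead)
xsd.exe orders.xsd /classes /language:CS
```

From there, instances of the generated class can be loaded and saved with XmlSerializer.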

When I started playing with this tool, I knew very little about XSD schema documents, but the prototyping I've been doing has gotten me up to speed pretty quickly, and I think I'm sold on them now. These docs do an amazing job of defining the "low-hanging" business rules that seem to trip up so many applications. The final proof for me will be to see if I can use these without imparting a "sorry, try again" feel to an application (I'd like to guide users, not slap them down).

So far, I've found that I've been bouncing back and forth from XML to XSD to code trying to get things sorted out the way I want. In my case, I control the XML and the XSD, so I'm adjusting both of them to get a combination I like. It's also been very helpful to load the XML up with the generated code and browse the object structure at runtime. It's helped me see a few structural things that I've been able to clean up. Working through all of this, there are a lot of features in XSD that really help tighten up the definition of the XML, and being able to see the serializable code and browse the runtime objects really helps my understanding of the overall project.

As with any such generation technology, there's a round-trip problem here, so I expect that this is mainly a one-time prototype / jump-start tool. I also found it pretty awkward to use VS.Net to look at XSD files, because it launches a designer that doesn't really give you access to the XSD itself. Despite these small problems, it's definitely worth a look if you haven't seen this before!

Related links:
Learn about XSD on w3schools.com
Online XSD schema validator
MSDN documentation for Xsd.exe
Appdev: Generate web service stubs from WSDL

Clash of the Titans

IBM and Microsoft are squaring off for leadership of Enterprise development. Rational (IBM) is ramping down on .Net support (they’ve been paying lip service only for quite a while), and Microsoft’s Ballmer (“Developers, developers, developers”) is positioning VS 2005 on the high end to be just what Enterprise developers have always needed.

Who wins and who loses here? And how does this affect the existing industry alignment (Microsoft vs. Everybody Else)?

Historical Background

Over the last few years, IBM and Microsoft have co-existed in relative peace because they've concentrated on different markets. In the last year or so, though, there's a definite "ain't enough room in this town for the both of us" feel to this landscape. IBM, whose stronghold has always been big, big business, has made quite a bit of noise about breaking into the SMB market. Microsoft, on the other hand, has been working hard on building enterprise features into Whidbey, its next development platform.

The last time IBM and Microsoft really squared off against each other in a meaningful way was when OS/2 and Windows were duking it out. In that battle, IBM appeared to go quietly into the night, preferring to harvest the more profitable pastures of hardware sales and high-margin services. As the dust settled, they found themselves owning some big-company high ground (good), but with very little platform or developer penetration (bad).

This was the point where they jumped big-time into Linux and Java support, which made a good deal of sense for them. After all, they'd be able to make a good living on services following this strategy. In addition, if they couldn't own developers, they wanted to make damn sure that Microsoft didn't either. Snapping up Rational added more fuel to this fire, fitting well with the Java development and big-business focus. The final piece of the puzzle for them was Eclipse, started as an IBM-sponsored open-source Java development IDE. Ironically, in order to really woo the open-source community, IBM had to relinquish control of the project, which has gone on to be wildly successful.

So, IBM finds itself with a strong enterprise presence and strong ties to Java and open source development. They have, however, no direct control over a big-time language, OS, or IDE. They have an app server and a database that are both respected but certainly not dominant. Finally, they're still outside looking in at the SMB market.

Microsoft, on the other hand, has owned the SMB market for as long as there's been one, and they continue to advance their well-respected IDE. They own the IDE and languages (C#, VB.Net), and they're running into the same problem IBM had with Eclipse: people are afraid to commit to platforms owned by one vendor.

Microsoft has been vulnerable on the hobbyist programmer front relative to open source platforms like LAMP (Linux, Apache, MySQL, and PHP/Python). These platforms are free, and it's easy to find cheap hosting with this environment already set up, creating an uphill battle to attract weekend web warriors.

In the enterprise, Microsoft has been losing to Java again because people are afraid of vendor lock-in. Sun, HP, BEA, IBM and others have been quite successful convincing big-time IT that the only way to ensure scalability is with a J2EE software stack on *nix. The apparent portability from one vendor to another has also helped create the perception of safety. Microsoft just hasn't had an answer to this, and going proprietary with .Net didn't help their cause at all.

Today's Playing Field

Looking at this week's announcements, IBM can clearly be seen to be retreating, while Microsoft is in attack mode. Microsoft's biggest vulnerability continues to be its single-platform direction, though there are some signs of softening (cooperation with Sun, discussions with RedHat). If Microsoft would publicly get behind cross-platform .Net efforts like Mono and Mainsoft, I think there would be a big boost to .Net adoption as companies see vendor and platform options become available.

IBM continues to shoot itself in the foot by taking on Microsoft directly and even by poking at its most loyal developers (IBM created an uproar in the Java community by showing interest in PHP; Java developers felt that this was a show of no confidence in Java). Product offerings from IBM continue to be disjointed and confusing, contributing to the perception that IBM's products exist only to sell IBM's services.

The wild cards right now are vendors like Oracle and BEA, who have been reasonably quiet in recent months. BEA just began a major campaign to launch their next-generation SOA platform, which claims to be the optimum platform for SOA regardless of language. Oracle is still digesting PeopleSoft, but appears to be interested in taking the enterprise from the applications front rather than the development front.

Predictions

As IBM's grip on enterprise development loosens, support for Java will become more fragmented. The strong, widespread support for Eclipse ensures that Java is in no danger of disappearing any time soon, but there's also no sign of any unifying force for Java as a whole.

Although UML has never been really dominant (even IBM says that only about 6% of developers use UML), Microsoft's continued upward pressure will spell the end of UML as a mainstream modeling tool. Rational will become marginalized as a boutique vendor.

Finally, the big one. As BEA continues to position itself as the platform to run the enterprise on regardless of language, they will integrate .Net CLR support into WebLogic, perhaps through a derivative of Mono. This is viable for them because they're already acknowledged as a leader in the J2EE space, and nobody seriously considers Microsoft to be an enterprise application management platform.

Although some have considered the possibility of a Microsoft/BEA acquisition, it's actually more valuable for Microsoft to have BEA establish .Net support while still independent. As an independent company, BEA can relieve the single-vendor lock-in problem that's limiting Microsoft today, facilitating much more rapid .Net adoption in the enterprise. As the first application server to support J2EE and .Net, WebLogic will surge, forcing WebSphere and JBoss to follow.

There you have it - remember, you heard it here first!

Introducing Predictions

The high-tech field has never been at a loss for news. In my years as an IT professional, I have seen the change from mainly print-based distribution (magazines, newspapers) to mainly electronic (newsletters, web sites). As distribution became cheaper, the volume of news exploded, to the point where the amount of information available is overwhelming.

Though we now know more than ever before about what's happened, we're in no better position to predict the future -- until now. I've always had a talent for spotting trends and momentum in our industry, and I'm going to let you in on what I'm seeing. Watch this space for news you can really use.

Beware of long answers

I think everyone's experienced difficult conversations in which you have to drag information out of someone. If you've ever had any training or experience in requirements gathering, sales, or other interviewing, you've also learned about open questions vs. closed questions. In my experience, this training is almost always focused on drawing additional conversation out of someone who doesn't want to be conversational.

There's another side to this phenomenon, though: the person who takes a yes-no question and fires back a wilting soliloquy. Some people call this doubletalk. Some people call it rambling. Many people don't even realize it's happening -- it's the long answer to a short question.

I've begun to pick up on this more quickly than I have in the past because I'm seeing it in a few people on a regular basis. I can now spot the "long answer" almost immediately, just by paying attention to the nature of the question. There are some questions, of course, that should be able to be answered in one or two words ("Yes", or "This week."). When your answer doesn't fit in that sort of box, it stands out vividly.

I've seen two forms of the long answer, and these can be difficult to tell apart. The first is just rambling. This detracts from effective communication, but it's not really too harmful. You can identify this answer because there is actual information in the answer -- it's just spread out across a whole lot of useless dialog. Watch for this from engineers who provide exhaustive detail when it's not necessary (if you're an engineer, pay attention to your own answers - it's easy to do this).

The second form of "long answer" is true doubletalk or evasion. This is best detected by remembering the question that was asked and trying to match the answer to the question. If someone manages to change the subject of the conversation in the first three words of their answer, they're deflecting attention from the original question. You can sometimes spot these in advance of the actual answer, because the question will demand a commitment in the answer:

  • When will you be done?
  • How much will it cost?
  • Have you documented the requirements?

The classic form of doubletalk is the political debate. Politicians are well known for their ability to talk on and on without saying a thing. Though it's less commonly recognized, this sort of thing happens all the time in business. When it's excessive, it's easy to spot, but very often, it escapes just under the radar. Again, keep the question in mind as you're listening to the answer, and see if the question is satisfied.

In order to become attuned to the "long answer", try watching for it in other people's conversations, and then watch for it in your own answers. Here are some occasions that may yield examples:

  • Staff meetings.
  • Sales calls.
  • Job interviews.
  • Design reviews.

Armed with a little knowledge, you can make a lot of sense out of the context of some of these conversations, and the results can be enlightening!

Process isn’t just for “normal” projects

I've been working through some interesting process issues with my employer's CTO and head of Product Management. The thrust of these discussions is that we've revised our Product Planning and Product Development processes, and I'm currently working on documenting what we've agreed upon. No sooner had we come up with a plan, however, than a "highly important" project sprang up, prompting discussions about suspending parts of our process because this project was so important. I've managed to stop my head from spinning long enough to gather some thoughts....

Origin of Process Changes

I've grown my current development team from nothing, and grown our process along the way to fit the organization. Our informal process guide has always been "just the right amount of process for our needs... and nothing more." In our most recent revisitation of process, I worked with our CTO and Product Manager to address schedule variations due to requirements problems.

In short, we were entering a release with only the vaguest sense of the requirements for features in that release. In the past, this had resulted in schedules not being met because we didn't really understand what we were supposed to build until we had moved a portion of the way through the release. I'd suggested better requirements a number of times (citing Joel Spolsky's excellent guidelines), but this was written off as too much work.

Prototyping, it was agreed, would be the best compromise for our situation, allowing us to build out more detailed requirements iteratively. We tried this in a mini-release, and though we had some minor problems, it was deemed to be generally workable. A side effect of this process is that our release date couldn't really be known until we moved through about 1/2 of the release. Prior to that, we had a target and a +/- range for our date, but nothing one could take to the bank.

Screeeechhh!!!

It was about this time, of course, that the Important Release reared its head. We needed to figure out a date for the release *right away*, which was the first violation of our process. Nevertheless, I came up with a date based on similar work and similar releases we'd done in the past.

The date wasn't good enough.

Thus began the further erosion of process: "why can't we skip parts of the process and speed up the release?" "I know this isn't our normal process, but this isn't a normal release -- this one's *IMPORTANT*".

There was clearly a whole raft of concepts that never managed to lodge securely in the nooks and crannies of these executives' minds. These are a couple of pretty smart people, so I can't chalk this up to a lack of cognitive ability. There's something here that these guys really don't get, and I can't account for it.

The Missing Link

In the spirit of "we hold these truths to be self-evident", I'm adding here the rationale I've used to establish process. It seems to make sense to me, but as my recent experience has shown, it can't be taken as a given that this makes sense to everyone. I've seen this widely chalked up to "some people don't get it", and I have to echo that sentiment at this point.

Our software development process, like most, is designed to move us through a release as expediently as possible. It's not my habit to put process in place to make us go more slowly. Unfortunately, this is a nuance that's frequently lost on the casual observer. Process, it's felt, is overhead. Cut the process, and you'll go faster.

Steve McConnell's Rapid Development has a great passage about a software team from Ernst & Young who attended a programming contest and used process to great effect. They got out of the gate more slowly than everyone else, but their process made their progress so reliable and relentless that they quickly overtook the field. They finished second only because of a breakdown caused by skipping part of their process! (They used the whole process at the next event, and won.)

If process didn't help us, we wouldn't use it. Therefore, any process we've got is here because it makes us go faster. Simple idea, but frustratingly elusive when it comes to getting non-developers to internalize it.

I tried another example, with seemingly little effect. Imagine someone who spends lots of valuable time coming up with a disaster recovery plan, and then, when a disaster hits, the first thing they do is reach for the disaster recovery plan and chuck it in the trash. "I don't have time for a plan," they'd say -- "We have an emergency on our hands." Most people, I think, would say that they deserve whatever fate befalls them.

Not so with a software development process, though.

I'd like to hear your impressions and ideas. Register and post your comments if you've dealt with this.

System Admin checklists

Last week, I had an interesting discussion with my Sys Admin that left me boiling a bit. I ended up needing to find an article I thought I'd bookmarked a long time ago to prove a point to him, and had to search to find it. This is another "gem" you'll want to keep handy -- if you're a Sys Admin, it's a must to read and understand, and if you're not a Sys Admin, it really helps cut through BS at those times when you feel that things just shouldn't be working the way they are.

Our Sys Admin is the nicest guy in the world, but to say that he's disciplined would be to do a great disservice to the term. In fact, he's a "squeaky wheel" guy, addressing issues just as they're starting to get really annoying, so my conversation with him shouldn't have surprised me. Nevertheless, I was surprised to learn that we had not, to date, been monitoring the event logs on our Windows servers. At all.

After I turned three shades of purple, I tried to express my amazement, given that you can practically get software to monitor event logs free in a box of breakfast cereal. What I really wanted to have handy, though, was an article I'd read years before explaining very simply what a good Sys Admin should do daily, weekly, monthly, and so on. I just knew that monitoring event logs was in there.
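Even without a packaged monitoring product, the daily "check the event logs" item is easy to automate in a few lines. The original checklist targets Windows NT, where you'd use the Event Log API or an export tool; as a minimal, hedged sketch, this assumes the logs have been exported to plain text with a severity word (ERROR, WARNING) on each line -- the function name and log format here are my own illustration, not anything from the article:

```python
# Hypothetical sketch of a daily event-log check. Real Windows event logs
# need the Event Log API (e.g. via pywin32) or an export step; this assumes
# entries already exported as text lines containing a severity keyword.
from collections import Counter

SEVERITIES = ("ERROR", "WARNING")

def summarize_log(lines):
    """Count log entries per severity so a daily check can flag problems."""
    counts = Counter()
    for line in lines:
        for sev in SEVERITIES:
            if sev in line.upper():
                counts[sev] += 1
                break  # one severity per entry
    return counts

if __name__ == "__main__":
    sample = [
        "2004-03-01 09:00 Information Service started",
        "2004-03-01 09:05 Warning Disk nearing capacity",
        "2004-03-01 09:10 Error Backup job failed",
    ]
    counts = summarize_log(sample)
    # Anything counted here is a checklist item to investigate today.
    print(f"errors={counts['ERROR']} warnings={counts['WARNING']}")
    # prints: errors=1 warnings=1
```

Scheduled to run each morning and mail its one-line summary, even something this crude beats not looking at the logs at all.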

I ended up spending a few minutes trying to dig this up, but I finally found it again: NT system administrator's checklists