Architecture is a tough sell. It's rarely linked directly to the delivery of a feature, much less is it a feature in its own right. And without that tie-in, business leaders won't approve the time and expense to build or improve your architecture.
Of course, as software professionals, we understand why architecture is important. You can build almost any individual feature without investing in any meaningful architecture, but it probably won't work well, and it's certain to be difficult to maintain. A good architecture can make development proceed more quickly, make it more likely that you'll hit your delivery dates, and make maintenance far easier.
Architecture is the BASF of software, then, isn't it? ("We don't make the software, we make the software better...")
No, that's not going to work very well, either.
In an earlier post, I pointed out a thread on one of the Joel on Software forums where a poor soul was trying to justify the purchase of some development software, and his managers were having none of it. He asked if he could get by with free software for his work, and the answer was a qualified "yes" - even for .NET development.
The big problem here, in my opinion, was the value placed on software by his managers. After all, if he couldn't justify a couple hundred bucks for software, how in the world would he be able to justify the hours he'd spend doing the work?
Of course, I pointed out Steve McConnell's excellent article on Technical Debt. It remains the single best business-oriented explanation of software architecture as it relates to business goals, IMO. Even this, though, seemed not to have really sunk in, so I attempted to shed some light on the subject with a little business-mismanagement humor. It went something like this:
I think it's pretty common for a company to spend money on capital expenses where management can see a direct connection to production and revenue. These companies will go out and spend millions on a new wangdoodle press, but they won't pony up for software.
If this is the case, then there *may* be hope - management needs to become educated that software that's developed, as well as software that's purchased, is a capital expense. The software is expected to provide value for a period of time (as defined by the depreciation schedule), and at the end of that schedule, it's going to be working about as well as a wangdoodle press at the end of its useful life.
Now, any wangdoodle press expert will tell you that you can keep one of those beasties running for years and years after its useful life is supposed to be over, but at what cost? After all, you're replacing hornswaggle belts just about every week, and every time you turn around, you've got to shut the machine down to recalibrate the whosiwhatsit clearance.
When the wangdoodle press is down, you know, there are no wangdoodles to sell, so revenue is impacted. So, in addition to the cost of parts and the fact that you're constantly paying a premium for the wangdoodle repair guy to make emergency fixes, there's an opportunity cost associated with running this old machine.
If your managers get this, then they might be able to understand that software is sort of the same.
If, on the other hand, your managers have rusty wangdoodles, then you're SOL. Sorry.
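To put some toy numbers on the press metaphor - every figure below is invented, purely for illustration - here's roughly how the tally works out once you count the downtime and not just the repair bills:

```python
# The wangdoodle-press version of the technical-debt argument, with invented numbers.
# Keeping the old machine limping along has a visible cost (parts and repairs)
# and a hidden one (revenue lost while the press is down).

hornswaggle_belts_per_year = 50 * 200     # ~weekly belt swaps at $200 apiece
emergency_repair_premiums  = 12 * 1_500   # monthly emergency call-outs for the repair guy
downtime_hours_per_year    = 120          # hours the press sits idle
revenue_per_running_hour   = 400          # wangdoodle revenue when it's running

visible_cost     = hornswaggle_belts_per_year + emergency_repair_premiums
opportunity_cost = downtime_hours_per_year * revenue_per_running_hour

print(f"Visible cost of nursing the old press: ${visible_cost:,}")
print(f"Opportunity cost of downtime:          ${opportunity_cost:,}")
print(f"True annual cost:                      ${visible_cost + opportunity_cost:,}")
# Visible cost of nursing the old press: $28,000
# Opportunity cost of downtime:          $48,000
# True annual cost:                      $76,000
```

The repair bills are what show up in the budget; the downtime is what actually hurts.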
Almost on cue, a couple of weeks later, another thread on JOS popped up, questioning the validity of the whole concept of technical debt. This one is worth reading top to bottom, because the simple fact that so many software professionals are all over the map on this topic illustrates how elusive the concept is, and how easy it's going to be for any given business manager to miss it altogether.
Once again, I'll reprint a few of my comments here:
[T]he idea of turning software practices into a financial transaction is to account for the difference between a system built well and a system full of short cuts.
A premise here is that a lot of managers think that if a well-built system (architecturally sound, documented, TDD - all the stuff you employ to build better systems) costs "X", then you can build the same system by short-cutting documentation, unit tests, good architecture, etc., for something less than "X" - say 0.8X. I know we'd all dispute that premise, but most of us have worked for managers who believe it at some point.
The concept of technical debt says that the remaining 0.2X of cost doesn't just go away - it becomes debt. You're going to pay for it later: when you support, maintain, and enhance the system, all of that work becomes more difficult because you didn't do the job right in the first place. And, like financial debt, there's a carrying cost to technical debt that won't go away until you service the debt or dispose of the asset.
When the business decides to acquire software, then, they're deciding either to fund the full cost (paying for proper construction) or to fund only a portion of it and take on a measure of technical debt equal to the costs they're cutting from development.
I understand that there are a lot of details that don't fit this model, not the least of which is the bit about speeding development by short-cutting good software practices. I don't believe that anyone's suggested that technical debt can be tracked so precisely that it goes onto a balance sheet (though there are far more ridiculous things on balance sheets already). This is a metaphor to help non-technical managers understand how technical decisions create business impacts.
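If it helps to see the arithmetic behind that, here's a back-of-the-envelope sketch. The numbers are made up - nobody can measure the "interest rate" on technical debt with any precision - but they show how a modest up-front saving can be eaten several times over by the carrying cost:

```python
# Back-of-the-envelope sketch of the technical-debt arithmetic above.
# All numbers are hypothetical; nothing here comes from McConnell's article
# or any real project.

build_cost_done_right = 100_000   # "X": architecture, tests, and docs included
shortcut_fraction     = 0.20      # the 0.2X of work that gets skipped
carrying_rate         = 0.30      # yearly "interest": extra friction on every
                                  # maintenance and enhancement task
years_in_service      = 5

principal = build_cost_done_right * shortcut_fraction   # the debt taken on up front

# Simple-interest view: each year of supporting the shortcut-laden system
# costs an extra slice of the unpaid principal.
total_carrying_cost = principal * carrying_rate * years_in_service

print(f"Up-front 'savings':      ${principal:,.0f}")
print(f"Extra cost over {years_in_service} years: ${total_carrying_cost:,.0f}")
# Up-front 'savings':      $20,000
# Extra cost over 5 years: $30,000
```

The specific figures don't matter; the point is that the 0.2X never vanishes - it just moves to the maintenance budget, with interest.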
Is this horse finally dead? I really doubt it. If anything, I can see how immature our practice is, and how fractured we are as a group of professionals. If we, as software developers, can't come together to understand, embrace, and communicate something as fundamental as technical debt, it's going to be a long time before we can expect our bosses and our bosses' bosses to do the same.