DevOps: You’re doing it wrong


Each time a new software development trend appears, people manage to find a way to misinterpret it so that it looks like a shortcut -- typically, because this is far easier than really understanding the idea in the first place.  Invariably, these shortcuts fall flat on their faces -- perhaps contributing to the "trough of disillusionment" in Gartner's hype cycle.  I saw this when people did procedural coding in an object-oriented language and called it OO, and I saw it when people thought they were "agile" just because they had a daily stand-up meeting.  I've even seen it when the stand-up meeting was held sitting down.

Now, it's DevOps.  In much the same way that Agile doesn't mean you can skip understanding software requirements, DevOps doesn't mean you're getting rid of your Operations team because your developers are doing their jobs for them.  In "DevOps: The Future Of DIY IT?", ReadWrite points to a Gartner survey showing that NoSQL databases are generally being administered by developers -- not DBAs -- and extrapolates from this a future in which developers run everything in production.

There are a couple of gigantic holes in that argument, though.  First, I'd bet that if companies had DBAs who knew the first thing about administering a NoSQL database, they'd put them to work.  Instead, these tools are so new and so immature that the tooling is still being invented, to say nothing of the procedures.  The second big problem with this over-generalization is that when developers are administering NoSQL databases (or any other production systems), they're not developing applications -- which quickly makes for a very expensive Operations staff.

That ain't what DevOps is all about.

The goal of DevOps is to get developers and operations working together in a virtuous feedback loop -- just because they start acting like they're on the same team doesn't mean that either of them is going away.

The politics of software

This fall, as healthcare.gov imploded before our eyes, we saw any number of self-proclaimed experts chime in on why it coughed, sputtered, and ground to a halt, and how, exactly, it might be fixed.  My guess is that the answer is more complicated than most of them let on, but I'll bet there's a healthy dose of politics mixed in with whatever technological, security, and requirements issues might have surfaced along the way.


It seems somewhat counter-intuitive to talk about politics at all in the context of software development, of course.  One of the aspects of software that really appealed to me when I entered this field was that for most problems, there existed an actual correct answer -- and there were no politics in algorithms.  Ah, to return to the halcyon days of simple problems and discrete solutions!

Today's problems are more complicated than ever before, though.  Prodigious capabilities have bred complex systems and murky requirements under the best of circumstances, and no government project operates under the best of circumstances.  For those of you in private enterprise, you surely are aware of the struggles bred of competing interests and limited resources, but in a government setting, all those factors explode.  Funding is rarely connected directly to stakeholders, opinions are everywhere, and "deciders" are nowhere to be found.   Not to put too fine a point on this, but if we were to think of government-sponsored software as having been congealed rather than developed, we might be on the right track.  It's actually a small miracle when these systems work at all, given the confluence of competing forces working to rip projects in seventeen directions at once.

Think back for a moment on the early days of Facebook or Twitter or any of the other massive applications that serve as today's benchmarks of reliability.  They weren't always so reliable, of course.  Twitter, in particular, gave birth to the famous "fail whale" meme as it sorted out its capacity and reliability issues.  To be clear, Twitter operates at a huge scale, but all it's doing is moving 140-character messages around -- there isn't a whole lot of business logic there, short of making sure that messages get to the right person.  It's pretty easy to gloss over those growing pains now, but virtually every large system has them.  In the case of healthcare.gov, the failures happened under the hot lights of opening night, amid opponents who wanted desperately to see the system fail, and fail hard.

If you work amid politics like this, I'd love to offer a simple solution, but sadly, I have none.  Instead, I'd urge a little empathy; walk a mile in the shoes of the developers, project managers, analysts, and testers on projects like this before you criticize too vigorously.  I can assure you that if you think this was a failure with a simple cause, you're mistaken.

Testing – It’s not just a good idea

Straight out of the business section, here's a story from CNN about a small business owner who came up with an innovative algorithm to generate t-shirt slogans.  Failure to test and monitor the output, however, led to a host of horribly offensive slogans, followed by a social media outcry and a blacklisting from Amazon.
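Even a crude screening pass over the generated output would have caught the worst of it before anything went on sale.  Here's a minimal sketch in Python of the kind of check that was missing; the word lists, blocklist, and function names are purely hypothetical, invented for illustration:

import re

# Hypothetical blocklist -- a real one would be far larger and maintained
# separately from the code.
BANNED_TERMS = {"hit", "hurt", "kill"}

# Hypothetical word lists, standing in for whatever the real generator combined.
VERBS = ["hug", "love", "hit", "praise"]
OBJECTS = ["your dog", "your boss", "everyone", "mondays"]


def generate_slogans():
    """Mechanically combine words into candidate slogans."""
    for verb in VERBS:
        for obj in OBJECTS:
            yield f"{verb} {obj}".upper()


def is_acceptable(slogan):
    """Reject any slogan containing a banned term (whole-word, case-insensitive)."""
    words = set(re.findall(r"[a-z']+", slogan.lower()))
    return words.isdisjoint(BANNED_TERMS)


if __name__ == "__main__":
    for slogan in generate_slogans():
        status = "PUBLISH" if is_acceptable(slogan) else "BLOCKED"
        print(f"{status}: {slogan}")

In practice you'd want a far larger blocklist, human review of anything borderline, and monitoring of what actually gets listed -- but the point stands: when an algorithm generates your product, its output is part of your product, and it needs to be tested like one.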

Now, the owner of this business says the company is 'dead'.  Don't let this happen to you -- your business could be one headline away from the same fate if you don't manage risks carefully.