You may be familiar with the old joke about the chicken & pig’s relative contributions to breakfast — the chicken, of course, being involved by virtue of providing the eggs, and the pig being committed by way of the bacon. The origin of this saying is now lost to antiquity, but it has been adopted as an illustration of dedication by sports personalities and business coaches because it manages to capture these relative levels of engagement succinctly and powerfully.
I found myself reaching for the chicken & pig business fable this week in the context of Product Ownership. We’ve got a back-office product that’s just not getting a lot of love from the business — lots of people who want to provide input, but nobody who’s interested in taking ownership.
This isn’t unexpected or unreasonable. Product Management as a professional discipline is still largely nascent. On top of that, initiatives that raise the top line are always an easier sell than back-office cost-center programs. For these reasons, I’m not sure that accounting, billing, document-management and other cross-cutting infrastructure-like programs are likely to lead the way in agile adoption or digital transformation, but transform they must — eventually.
Reflecting on this article, we’ve done this sort of thing before. Going all the way back to CICS to run terminal applications on mainframes, we’ve separated UI structure from behavior. Microsoft Access had its forms, which propagated to Visual Basic, and eventually to .Net, WPF, XAML, and so on. Static is easy, and frankly, it works pretty well most of the time, but as UI behavioral needs become more sophisticated, these static structures are ill-equipped to handle those needs.
So, I’m skeptical these techniques are going to put HTML out of business anytime soon, but in a dynamic application, they make a boatload of sense.
Keeping abreast of technology updates has always been a formidable job. As always, there are all sorts of options for consuming the fire hose of news. For me, RSS feeds (I use Feedspot for my reader now) from some trusted sources remain a favorite, and I’d like to share a couple of gems that have been really fantastic for a number of years now.
Both of these are design-oriented sites, focusing on UI technologies, techniques, and reviews as well as design theory, layout reviews, and the like. Given my typical focus on back-end design and architecture, these sites are a great way to bolster my front-end perspective and give me some tools to jump-start UI work. I’ve been following both of these sites long enough that I honestly can’t remember where I found them, but I’m really impressed with the track record for both.
Hongkiat.com (odd name, I know, but it’s good stuff) is a real grab-bag of articles, so be prepared to filter out some topics you may not be interested in. If you sift through to topics that are of interest, though, there are some really good resources. In most cases, these articles are annotated link collections, so don’t expect to find source code here. Do expect to see examples of UI tools and technologies put to use with links to source sites where you can learn more. A quick flip through recent articles shows topics like these:
Smashing Magazine tends to cover fewer topics more deeply, with more of a focus on their own content, vs curation of content from other sources. This is a great source for tutorials and backgrounders, and they’ve published a number of books based largely on rolled-up content from the site. Again, not everything will be of interest, but the quality of the content that’s here is really high. Here’s a quick sampling of some recent articles from Smashing Magazine:
I’ve recently had occasion to see the same challenge pop up in a couple different settings — namely, bridging the gap between agile development and integration into an enterprise. In both of the cases I saw, development employing agile practices did a good job of producing software, but the actual release into the enterprise wasn’t addressed in sufficient detail, and problems arose — in one case, resulting in wadding up a very large-scale project after an equally large investment in time, energy, and development.
Although the circumstances, scope, and impact of problems like this vary from one project to the next, all the difficulties I’ve seen hinge on one or more of these cross-cutting areas: roadmap, architecture, or testing / TDD / CI.
Of these three, roadmap issues are by far the most difficult and the most dangerous. Creating and understanding the roadmap for your enterprise requires absolute synchronization and understanding across business and IT areas of your company, and weaknesses in either area will be exposed. Simply stated, software can’t fix a broken business, and business will struggle without high-quality software, but business and software working well together can power growth and profit for both.
Roadmap issues, in a nutshell, address the challenge of breaking waterfall delivery of enterprise software into releases small enough to reduce the risk seen in a traditional delivery model. The web is littered with stories of waterfall projects that run years over their schedule, finally failing without delivering any benefit at all. Agile practice attempts to address this risk with the concepts of Minimum Viable Product (MVP) and Minimum Marketable Feature (MMF), but both are frequently misunderstood, and neither, by themselves, completely address the roadmap threat.
MVP, taken at its definition, is really quite prototype-like. It’s a platform for learning about additional features and requirements by deploying the leanest possible version of a product. This tends to be easier to employ in green-field applications — especially ones with external customers — because product owners can be more willing to carve out a lean product and “throw it out there” to see what sticks. Trying to employ this in an enterprise or with a replace / rewrite / upgrade scenario is destined to fail.
Addressing the same problem from a slightly different vector, MMF attempts to define a more robust form of “good enough”, but at a feature-by-feature level. In this case, MMF describes functionality that would actually work in production to do the job needed for that feature — a concept much more viable for those typical enterprise scenarios where partial functionality just isn’t enough. Unfortunately, MMF breaks down when you compound all the functionality for each feature by all the features found in a large-scale enterprise system. Doing so vaults you right back into huge waterfall delivery mode, with all its inherent pitfalls.
In backlog grooming and estimation, teams look for cards that are too big to fit into a sprint — these cards have to be decomposed before they can fit into an agile process. In the same way, breaking huge projects down by MVP or MMF also must occur with a consideration for how release and adoption will occur, and releasing software that nobody uses doesn’t count!
Architects and developers recognize this sticking point, because it’s the same spot we get into with really large cards. When we decompose cards, we look for places to split acceptance criteria, which won’t always work for enterprise delivery, but with the help of architecture, it may be possible to create modules that can be delivered independently. Watch for that topic coming soon.
Armed with all the techniques we can bring to bear to decompose large-scale enterprise software, breaking huge deliveries into an enterprise roadmap will let you and your organization see software delivery as a journey more than as a gigantic event. It’s this part that’s absolutely critical for both business and IT to embrace: the partnership and trust you’re building with your Product Owners at scrum-team scale has to scale up to your entire enterprise in order for this to work. No pressure, right? Make no mistake — enterprise roadmap planning must be visible and embraced at the C-level of your enterprise in order to succeed.
Buy-in secured, a successful roadmap will exhibit a couple key traits. First, the roadmap must support and describe incremental release and adoption of software. Your architecture may need attention in order to carve out semi-independent modules that can work with one another in a loosely-coupled workflow, and you absolutely need to be able to sequence delivery of these modules in a way that lets software you’ve delivered yield real value to your customers. The second trait found in a roadmap of any size at all is that it’s nearsighted: the stuff closer to delivery will be much more clearly in-focus than the stuff further out. If you manage your roadmap in an agile spirit, you’ll find that your enterprise roadmap will also change slightly over time — this should be expected, and it’s a reflection of your enterprise taking on the characteristics of agile development.
Next up, I’ll explore some ways architecture can help break that roadmap into deliverable modules.
Last week, I was reminded of a lesson I learned from one of my mentors many years ago: when you’re regression testing, “different” is all you really need to look for.
A little context would probably help, here. Nearing the end of a sprint, we’d tested the new features in the sprint, but we really weren’t sure whether we had good regression coverage. Testing from prior sprints had part of the answer, but we couldn’t really afford the time to run all the tests for this sprint and all the tests from our prior sprints. Thus, the conversation turned to regression testing, automation, and tactics to speed it all up a bit.
The naive approach to regression testing, of course, is to run every test you’ve ever run all over again, checking each value, examining the dots on the i’s and the crosses on the t’s. It’s absolutely irrefutable, and in the absence of automated tools, it’s also completely impractical. With automated tools, it’s merely incredibly difficult, but fortunately, in most cases, there’s a better way.
Segue back to my years-ago epiphany, in which I was struggling with a similar problem, and the aforementioned mentor gave me a well-aimed shove in the right direction. He pointed out that once I’d achieved “accepted”, and thus, “good”, all I needed to look for when regression testing was “different”. All by itself, this didn’t do much to help me, because looking for “different” still sounded like a whole lot of looking. Combined with the fact that our application was able to produce some artifacts, however, this idea vaulted our testing forward.
Our application, it turned out, supported importing and exporting to and from Excel files, so we were able to use this to radically speed up the search for differences — we saved an export from a known good test, and future tests would just import this file, do some stuff, and export it back out again. At this point, a simple file compare told us whether we’d produced the same output we’d previously validated as “good”.
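The heart of this technique is nothing more than a diff against a known-good baseline. Here’s a minimal sketch of what that looks like — the file names and CSV contents are hypothetical stand-ins, and it assumes your export is plain text (a CSV, say, rather than a binary spreadsheet):

```python
import difflib
import tempfile
from pathlib import Path

def diff_against_baseline(baseline_path, current_path):
    """Compare a fresh export against the previously-accepted baseline.

    Returns a list of unified-diff lines; an empty list means the run
    matched the known-good output, so there's nothing to explain.
    """
    baseline = Path(baseline_path).read_text().splitlines()
    current = Path(current_path).read_text().splitlines()
    return list(difflib.unified_diff(
        baseline, current, fromfile="baseline", tofile="current", lineterm=""))

# Demonstrate with two small stand-in "exports".
with tempfile.TemporaryDirectory() as tmp:
    good = Path(tmp, "known_good.csv")
    latest = Path(tmp, "latest_run.csv")

    good.write_text("order,qty,total\n1001,2,19.98\n")
    latest.write_text("order,qty,total\n1001,2,19.98\n")
    assert diff_against_baseline(good, latest) == []  # same output: pass

    latest.write_text("order,qty,total\n1001,2,21.58\n")
    assert diff_against_baseline(good, latest)  # a difference: explain or re-baseline
```

In practice you’d wire this into whatever test harness you use; the point is that “checking every value” collapses into a single comparison against an artifact you’ve already accepted as good.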
Armed with this technique, we began looking for other testable artifacts — to the point where we built in some outputs that were used only for testing. At the time, it was time well spent, because the maturity of testing tools made the alternatives pretty prohibitive.
And what do you do about a difference when you find one? First reactions notwithstanding, it’s not necessarily a bug. A difference could arise as a result of an intended change, so it’s not at all unusual to be able to attribute a difference to a change you’re actually looking for, in which case you throw out the old baseline and replace it with a new one. Just remember you’ll need to explain each difference every time you find one, which brings me to a final issue you’ll want to sort out if you’re going to use this technique.
Any artifact you want to use in this way must be completely deterministic — that is, when you run through the same steps, you must produce exactly the same output. This seems like a gimme, but you’ll find that a lot of naturally-occurring artifacts will have non-deterministic values like IDs, timestamps, machine names, and so on — you’ll need to address these in order to have artifacts you can use effectively for testing.
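One common way to address those values — a sketch of my own, not something from the original project — is to mask them out of the artifact before comparing, so that anything that legitimately varies from run to run can’t trigger a false difference. The specific patterns here (timestamps, `id=`, `host=`) are hypothetical; you’d build the list from whatever volatile fields your own artifacts contain:

```python
import re

# Patterns for values that legitimately vary between runs; extend as needed.
VOLATILE = [
    (re.compile(r"\b\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}\b"), "<TIMESTAMP>"),
    (re.compile(r"\bid=\d+\b"), "id=<ID>"),
    (re.compile(r"\bhost=\S+\b"), "host=<HOST>"),
]

def normalize(text: str) -> str:
    """Mask run-specific values so two runs of the same steps compare equal."""
    for pattern, placeholder in VOLATILE:
        text = pattern.sub(placeholder, text)
    return text

run1 = "exported id=42 on 2014-12-19 09:15:01 from host=build-07"
run2 = "exported id=57 on 2014-12-20 11:02:44 from host=build-12"
assert normalize(run1) == normalize(run2)  # only volatile fields differed
```

Normalize both the baseline and the fresh export through the same function before diffing, and the comparison stays honest: a surviving difference is a real one.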
Once you get past this little hurdle, you’re likely to find this idea powerful and super-easy to live with, and what more can you ask for in regression testing?
I spoke with a software manager recently who’s considering selling some software that’s working great for their company. “We’re about 90% done,” he mentioned, with the implication that he’s nearly ready to unleash his creation in the marketplace, and I immediately recalled one of my earliest lessons in “product”, and one that’s echoed over and over through the years.
When it comes to productizing software, “done” is usually just a good start.
The prototypical software-to-product path starts with an application that’s built to solve a business problem. Once the software is proven, the leap to selling that software seems like a minor increment.
If you didn’t build productization into your work from the outset, though, you’ll need to address a whole host of concerns. Not all of these will apply to every new bit of software, but where you don’t already have this support in your company, you’ll need to develop it.
Branding. If you’re a widget company, you can be the best widget company in the world and still find it difficult to sell whosiwhatsits. Don’t count on a lot of umbrella support for a software product unless your main business bears a strong resemblance to that software, and if it does, be prepared for nervousness among your prospects – they’ll be (rightly) worried about you gobbling up their businesses once you’ve sold them your software. Branding, marketing and sales are first-class concerns if you’re really going to sell software.
Support. This is huge — your life will change as soon as you sign a customer to a software deal. You’ll need more than a guy to man emails and phones — you’ll need to be able to replicate problems your users are seeing, help them use your software to solve problems you’ve never encountered, and educate them on all aspects of your platform. If your software features any integration, be prepared for the demands on you and your team to multiply.
Maintenance. Somewhat related to support, you’ll need a plan to maintain software over time if you’re going to keep your customers happy. This doesn’t happen by accident; your ability to release updates and new software versions across a number of customers can only occur with careful planning and precise execution.
Each of these needs is worthy of more pages of discussion, but suffice it to say these are all contributing reasons why software products are so much more challenging than internal applications. In my opinion, this also brings greater reward.
In my 20+ years in software, I’ve seen plenty of changes. Exponential growth in computing power combined with maturation of tools, techniques, and frameworks has driven an inexorable march forward. Most of these changes take the form of incremental improvements and industry trends. Even though change is rapid compared to a lot of fields, it remains linear and gradual. You might already be reading articles that look back at things we’ve seen in 2014 or look forward to trends expected for the coming year — a byproduct of this sort of change is that it’s most visible when viewed over a period of time.
But every once in a while, there are quantum events that redefine the landscape of technology — transcending technology, in fact, to become part of our popular dialog. The introduction of Windows 95, for instance, or the announcement of the first iPhone. There are a handful of these events over the years where our landscape changed overnight.
No i-device was announced today, though. No new software empowered millions of users to be able to accomplish something they couldn’t have dreamed of yesterday. No, today, we saw an announcement from the FBI: For the first time, a case of truly harmful cyber-terrorism has been linked to a nation.
I’m talking about Sony, of course, and the hacking incident we’ve all been watching for the last few weeks. In case you’ve been living under a rock, the incident began with a massive hack of Sony’s systems, compromising emails, private information, intellectual property (including movies), and more. The hackers almost immediately demanded that Sony cancel distribution of their upcoming film, “The Interview”, leading to early speculation that the attacks originated from North Korea. The hackers continued to rain blows down on Sony until Wednesday, when Sony finally capitulated by pulling plans to show the movie.
Today’s announcement, though, while widely anticipated, signals a chilling new reality for all of us — not just those of us directly connected to technology. I believe we’ll look back on the events of the last couple weeks as an awful watershed moment. Everything just changed in a very scary way.
Recall the days immediately following the 9-11 attacks: I think the overwhelming majority of Americans were enthusiastically behind the idea that America needed to find someone’s butt to kick, and kick hard. As a global citizen, not to mention a member of the U.N. Security Council and the last remaining superpower on Earth, though, we as a nation were paralyzed by the fact that we couldn’t find an actual nation to declare war against. In lieu of a single well-defined war, we wound up heavily involved in military operations that still haven’t seen a true conclusion, and I don’t think most Americans feel like we’ve seen the sort of “closure” we’d have liked. It turns out that chasing terrorists is a little like trying to nail snot to a tree.
The Sony hack is a little different, though, if it’s true that this act originated with the North Korean government. It’s true that we’ve seen any number of highly suspect electronic invasions over the last few years in which nations (most notably, China) were suspected to play a part. In these prior cases, though, the incursions were typically more properly considered espionage, and the nations suspected of these activities were just careful enough that nothing “real” ever stuck to them. The Sony case, then, is a scary new milestone.
There’s lots of dust left to settle in this case. President Obama, for instance, has promised that action will be taken against those responsible for this attack, but we don’t know yet what form that might take. It’s unclear how we’d expect a State to retaliate against another State for actions directed at a corporation, as well. This is truly uncharted territory. Not uncertain, though, is the impact of this incident — I’m quite confident we’ll remember this week well into the future.
Software developers are a clever lot, and prone to bouts of creativity every once in a while. It turns out these are essential traits when building software, but cleverness must be tempered when it impacts the user-facing parts of your software.
Case in point: Bugzilla’s search results page. This is what happens when you try searching for something in Bugzilla and it doesn’t find any results. It’s supposed to be funny — the misspelling is, in fact, intentional.
But it’s not funny. It’s really, really not funny. It’s not funny for two very specific and very important reasons.
Reason 1: Usage context. If any Bugzilla user ever sees this message, it’s because he failed at the task he was trying to accomplish. Since the message exists solely to explain to the user that he failed, it’s pretty reasonable to assume that the user might not be in the best of moods already. I know I wasn’t.
Reason 2: Product context. If you’ve not already had the pleasure of using Bugzilla, let me fill you in on its search capabilities: they suck. Like, searching in Bugzilla is not only unpleasant, it’s also unfruitful way too often. It’s the best reminder you’re likely to see about why Google won the search wars — it’s because everything else used to work like Bugzilla. So when your (otherwise excellent) product has a critical flaw, such as searching in Bugzilla, it might be best to not choose that specific part of your product to try to crack a funny. Just sayin’.
The lesson in all this? Somewhere on any product team, there needs to be a voice of reason who’s looking at context stuff like this and deciding when it’s time to be funny, and when it’s not.
Years ago, I introduced a bug tracking system at a software company, and the most lasting lesson I learned from that process has nothing to do with software. You see, even though it took a while to accomplish this, I eventually trained our company to use this system to finally be able to answer the highest and greatest question of any operationally-oriented workgroup:
What the hell is going on?
It astounds me on a pretty regular basis how utterly lost people (and organizations) are with respect to this simple question, and I think it ultimately stems from people trying to relate to one another like people, rather than as parts of an organization. While this is completely understandable — even virtuous — the uncertainty and passivity of polite society are inefficient and confusing. Ironically, the more polite everyone tries to be, the more tension builds as problems fester and grow.
Like ripping off a band-aid, though, I’ve found that setting a few simple rules, and then sticking to them rigorously, so thoroughly squelches the confusion about “who’s doing what” that the pain of working the system quickly gives way to the relief of actually knowing what the hell really is going on.
Before jumping into the rules, understand that this approach applies most directly to tasks in the context of an organization. If you’re unsure of the difference, here’s a clue: if you’re talking about something the organization would expect to get done even if the person who normally does it is sick or on vacation, then it’s an organizational task. If it’s going to wait for you no matter how long you’re out of the office, then you don’t really need to bother with this at all — just work your own to-do list.
So, without further ado, the rules:
Have a list. Use a whiteboard or a spreadsheet or Scrabble tiles, but whatever you use, treat it as the one and only definitive source of truth and enlightenment for your organization (with respect to the subject of the list, of course). Given that tools like Bugzilla are available for free, Scrabble tiles would really be sort of silly, but the important part is that the list is kept sacred. Better yet, tools that are built for this sort of thing will help enforce the rest of these rules.
The list belongs to the organization, but assignments belong to people. You’re trying to map the somewhat nebulous idea of organizational responsibility to actions of individual people, so this part is important.
Someone needs to be responsible for the list. I’m not talking about items on the list — I’m talking about the integrity and life-force of the list itself. This is a full-time job, and it must be staffed all the time, so be ready to pass this baton when your list-owner is sick, on vacation, or out of the office. Your organization will eventually self-train to keep the list alive, but this takes serious time and dedication (think years, not months or days).
Issues belong to a single person. One, and only one, person. It’s important.
Issues can change owners. They should, in fact, or you’re probably not talking about an organizational issue. Most of the time, issues are created precisely because the person who finds the issue isn’t the person who needs to do something about it.
If it ain’t in the list, it never happened. Watch out for stuff happening “off the books”. I can’t tell you how many times I saw people send an email about an issue because they didn’t want to take the extra thirty seconds to add it to the bug tracking system — but in return for saving those few seconds, your organization gives up all the management oversight and quality control your list provides. I personally saw the single worst “off-list” violator in my organization turn into the biggest “list enforcer” over the course of a couple years, because the list works.
OK, I lied. Scrabble tiles suck. In most cases, you really want a place to show history, emails, attachments, and so on, because most issues complicated enough to need a system gather little artifacts like this as they’re worked, and if it ain’t in the list, it never happened. One of the points of tracking stuff in the list instead of email is that as issues pass from one person to the next, the list holds the whole thread and context of the issue — very unlike email, where information goes to die.
A quiet list is a dead list. If you don’t see activity in the list, it sure as hell doesn’t mean there aren’t any issues. It means nobody’s using the list. The list owner should have a good enough sense for the “pulse” of issues to know if this ever happens, by the way.
A quiet issue is a dead issue. Sounds familiar, no? If there’s no action on an issue, somebody needs to know why. If nobody cares about it all that much, make sure it gets shunted to a side track of some sort, but if it’s an issue that needs to be dealt with, find out why it’s stalled and address the problem. This is “project manager 101” stuff, but it’s really easy to miss unless you’ve got a living list, and really hard to miss if you’re really working a list.
There’s more you can do with issue tracking once you get this far, but I promise if you embrace the rules here in a meaningful way, your entire organization will take a step up in efficiency and effectiveness, and best of all, you’ll eliminate a whole lot of the latent tension that comes from trying to manage implicit lists.
I was asked recently to define “craftsmanship” in software development, and I thought this would make a great topic for discussion. You can obviously find a dictionary definition for craftsmanship, but much like with architecture, I think that when we attach the additional context of software development, craftsmanship introduces some important ideas about how individual contributors fit into a larger software development organization.
Software as Art
The concept of craftsmanship in general emphasizes the skills of individuals, and connotes high-quality, highly customized work. It is very much the antithesis of mass production.
This general understanding, when applied to software development, implies a very individualized experience — indeed, if you think about buying a work of art or even something like a piece of furniture, “craftsmanship” would imply that the product you’re buying is one-of-a-kind. Nobody, literally, has another item quite like the one you’re buying. For businesses commissioning custom software, there’s a nugget of truth here, of course — one of the reasons to write custom software is the presumption that your business requirements are different in some way from everybody else’s requirements, and your business demands the flexibility that comes with custom software.
Another key aspect of craftsmanship is the deep and broad skill set of the craftsman. Just as you’d expect a hand-crafted product to be almost entirely the result of one man’s work (to the extent that signing the work wouldn’t be unheard-of), so software craftsmanship emphasizes individual skill and accountability. And while craftsmanship doesn’t necessarily imply apprenticeship, a craftsman is certainly expected to be highly experienced and able to draw on years of work in the field in order to produce the sort of high-quality work that’s expected.
While high-quality, highly-customized software sounds very appealing from a business perspective, the deliberate pace and unyielding attention to detail we most often associate with craftsmanship is petrifying. Business software needs to move at the pace of business, after all, doesn’t it? I really believe that the implied slowness of craftsmanship acts as an impenetrable stigma against its adoption, which I think is largely unfortunate.
One of the key benefits of the higher-quality code produced by accomplished craftsmen is that less time is spent on an ongoing basis recovering from the sins of shoddy coding as these systems grow and evolve over time. High-quality code not only works better, it’s more maintainable, so that initial effort can pay real dividends over the life of a long-lived system. Where we sometimes talk about sloppy code as being high in technical debt, quality code can be seen as an asset for an organization.
Right now, software craftsmanship remains on the fringe of development practices, and thus tends to be favored in smaller shops, as well as in “green-field” projects much more so than in larger, more established environments. It remains to be seen whether it eventually transitions into mainstream use the way that Agile has, but I think a lot depends on the effective communication of the real benefits of craftsmanship, as well as on mature tactics to introduce and manage craftsmanship in the organization.
Craftsmanship is about individual contribution. It’s about taking pride in your work, and it’s about a relentless journey of improvement. The scope of these experiences is best-suited to small groups of developers who can help one another grow and hold each individual accountable to the group. This, all by itself, is a tall order. It’s not easy for anyone to expose himself to criticism, and that’s a huge prerequisite for this sort of culture to form. This sort of open sharing takes huge amounts of trust, and trust takes time to build.
Another critical motivator for a “craftsman-like” organization is the sense that the team is building something that’s going to last. When you think of hard goods built by craftsmen, you equate the higher quality to a longer lifespan — as a consumer, you may be willing to pay a premium for a product that will outlast a cheaper product because you know it’ll last a long, long time. In software, this understanding of quality has to be visible to the team as well as the customer, and the team needs to believe that this higher quality is valued. Few things are as demotivating for a developer as seeing a great body of code mangled by someone who doesn’t share the same capabilities or desire for quality.
It might seem at this point that I’m advocating an all-or-nothing strategy for software development, or that software craftsmen can exist only in small shops, but I don’t believe either of these is necessarily true. I believe that if craftsmanship is going to exist in a large organization, however, the organization needs to understand where it’s going to be applied and it must organize its teams to support craftsmanship in targeted areas. I think this prerequisite is well-suited to a component-based Enterprise Architecture, in which well-defined components or services are identified, invested in, maintained with care, and protected, and yes — this demands an enlightened and skilled application of Enterprise Architecture.
Craftsmanship isn’t a magic wand, or a buzzword, or a slogan. Craftsmanship takes hard work, and it demands commitment from organizations and developers. It’s not right for every organization, and it certainly isn’t right for all developers, but if you need software that stands the test of time, craftsmanship will pay dividends for the lifetime of that software.