Customer Lifecycle Architecture

If you've sat through any "intro to cloud development" presentations, you've seen the standard argument for cloud architecture: infrastructure needs to scale up and down with demand. Maybe you run a coffee shop bracing for the onslaught of pumpkin-spice-everything, or maybe you're just in a 9-5 office where activity dries up after hours. In any event, this scale-up / scale-down traffic shaping is well-known to most technologists at this point.

But, there's another type of activity shaping I'd like to explore. In this case, we're not looking at the aggregate effect of thousands of PSL-crazed customers -- we're going to look at the lifecycle of a single customer.

Consider two very different types of customers. The first profile is what you'd expect to see for a customer with low acquisition cost and low setup activity -- an application like Meta might have a profile like this, especially when you expect users to keep consuming at a steady rate.

Compare this to a customer with high acquisition cost and high setup activity -- this could be a customer with a long sales lead time or one with a lot of setup work, configuration, training, or the like. Typically, these are customers with a higher per-unit revenue model to support this work. Examples of a profile like this could be sales of high-dollar durable goods, financial instruments like loans, or insurance policies, and so on.

So, What?

Why, exactly, would we care about a profile like this? I believe this sort of model facilitates some important conversations about how a company is allocating software spend relative to customer activity, costs, and revenue. I also think an understanding of this profile can help us understand the architectural needs of services supporting this lifecycle.

Note that this view of customer activity has some resemblance to a discipline called Customer Lifecycle Management (CLM). Often associated with Customer Relationship Management (CRM), CLM is typically used to understand activities associated with acquiring and maintaining a customer using five distinct steps: reach, acquisition, conversion, retention, and loyalty.

Rather than just working to understand what's happening with your customer across the time axis, consider some alternative views. We can map costs and revenue per customer in a view like this, for example. Until recently, the cost to acquire a customer could be known only on an aggregate basis -- computed from total costs across all customers and allocated as an estimate. But as FinOps practices improve in fidelity and adoption, information like this could become a reality:

FinOps promises to give us the tools to track the investment in software you've purchased, in SaaS tools, and of course, in custom tools and workflows you've developed. Unless these practices are extremely mature in your organization, you'll likely need to allocate costs to compute these values. Also note that these costs are typically separated on an income statement: those occurring pre-sale tend to show up as Cost of Goods Sold (COGS), while those after the sale are operating expenses (OpEx). If you intend to allocate COGS per customer, the cost of pursuing prospects who never close has to be spread across the customers who do, since prospects you don't close generate no revenue at all.
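Here's a toy sketch of that allocation (Python, with invented numbers and field names) -- spreading total acquisition spend, including the cost of prospects who never closed, across the customers who did:

```python
# Toy allocation: all numbers and field names are invented for illustration.
prospects = [
    {"name": "Acme",    "acquisition_cost": 12_000, "closed": True},
    {"name": "Globex",  "acquisition_cost": 9_000,  "closed": False},
    {"name": "Initech", "acquisition_cost": 15_000, "closed": True},
]

total_spend = sum(p["acquisition_cost"] for p in prospects)
closed = [p for p in prospects if p["closed"]]

# Lost prospects generate no revenue, so their cost lands on closed customers.
cogs_per_customer = total_spend / len(closed)
print(f"Allocated acquisition COGS per closed customer: ${cogs_per_customer:,.2f}")
```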

Less commonly considered is the architectural connection to a view like this. You might see a pattern like this if you're writing an insurance policy and then billing for it periodically:

Not only do these periods reflect different software needs, they also reflect different opportunities to learn about your customer. Writing a new insurance policy is critically important to an insurance company -- careful consideration goes into making sure the risk of the policy is worth the premium to be collected. For that reason, an insurance company will invest a lot of time and energy in this process, and much will be learned about the customer here:

On the other hand, the periodic billing cycle should be relatively uneventful for customer and carrier alike, and less useful information is found here -- at least when the process goes smoothly.

I believe the high-activity / high-learning area also suggests a high-touch approach to architecture. Specifically, I think this area is likely where highly-tailored software is appropriate for most enterprises, and it likely yields opportunities for data capture and process improvement. Pay attention to the data gathered here, and take care not to discard it. Considering these activity profiles temporally may also lead us to identify separation among services; in this case, the origination / underwriting service(s) are very likely different from the services we'd use after that customer is onboarded:
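As a rough sketch of what that separation might look like (Python, with entirely hypothetical names), the origination / underwriting side captures rich data worth preserving, while the post-onboarding servicing side needs only a thin projection of it:

```python
from dataclasses import dataclass, field

# Origination / underwriting: high activity, high learning -- capture
# everything, because this data is expensive to gather and easy to lose.
@dataclass
class UnderwritingApplication:
    applicant_id: str
    credit_score: int
    property_details: dict = field(default_factory=dict)
    loss_history: list = field(default_factory=list)

# Servicing / billing: low activity, routine -- needs only a thin
# projection of what origination learned.
@dataclass
class BillingAccount:
    policy_id: str
    premium: float
    billing_cycle: str  # e.g. "monthly"
```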

What's next?

I'm early in exploring this idea, but I expect to use it as one signal to influence architectural decisions. An activity-over-time view of customers will likely help focus conversations about technologies, custom vs. purchased software, types of services and interfaces, and so on.

At a minimum, I think this idea is an important part of modeling services to match activity levels, as not all customers hit the same peaks at the same time.

When is an event not an event?

Events, event buses, message buses, and the like are ubiquitous in any modern microservice architecture. Used correctly, events support scalability and allow services to evolve independently. Used poorly, though, events create a slow, messy monolith with none of those benefits. Getting this part of your architecture right is crucial.

The big, business-facing events are easiest to visualize and understand - an "order placed" event needs no introduction. This is the sort of event that keeps a microservice architecture flexible and decoupled. The order system, when it raises this event, knows nothing about any consumers that might be listening, nor what they might do with the event.

```mermaid
sequenceDiagram
    participant o as Order Origination
    participant b as Event Bus
    participant u as Unknown Consumer
    o ->> b : Order Placed
    b ->> u : Watching for Orders
```

Also note that this abstract event, having no details, doesn't have a ton of value, nor does it have much potential to create conflicts if it changes. This is the "hello, world" of events, and like "hello, world", it won't be too valuable until it grows up, but it represents an event in the truest sense -- it documents an action that has completed.
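To make that concrete, here's a minimal sketch of the producer's side (Python, with a toy `publish` helper standing in for whatever real bus client you'd use -- the topic and field names are all invented):

```python
import json
import uuid
from datetime import datetime, timezone

def publish(topic: str, payload: dict) -> None:
    """Toy stand-in for a real event bus client (Kafka, SNS, etc.)."""
    print(f"-> {topic}: {json.dumps(payload)}")

# The producer records that something happened. It names no consumers
# and waits for no replies -- a true event.
publish("orders.placed", {
    "event_id": str(uuid.uuid4()),
    "event_type": "OrderPlaced",
    "occurred_at": datetime.now(timezone.utc).isoformat(),
    "order_id": "ord-12345",  # hypothetical identifier
})
```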

A more complete event will carry more information, but as you design events, try to keep this simple model in mind so you don't wind up with unintended system-to-system coupling when events become linked. This coupling sneaks into designs and becomes technical debt -- in many cases before you realize you've accrued it!

The coupling of services or components is never as black & white as the "hello, world" order placed event - it's a continuum representing how much each service knows about the other(s) involved in a conversation. The characterizations below are a rough tool to help understand this continuum.

When I think about characterizing events in terms of coupling, the less coupling you see from domain to domain, the better (in general).

Characteristics of low coupling:

  • Events produced with no knowledge of how or where they will be consumed. Note that literally not knowing where an event is used can become a big headache if you need to update / upgrade the event; typically, you'd like to see the job of tracking consumers fall to the event transport rather than the producing domain.
  • A single event may be consumed by more than one external domain. In fact, the converse is a sign of high coupling (see below).

Characteristics of high coupling:

  • Events consumed by only one external domain. I consider this a message bus-style event - you're still getting some of the benefits of separating domains and asynchronous processing, but the domains are no longer completely separate -- at least one knows about the other.
  • Events that listen for responses. I consider these to be RPC-style events, and in this case, both ends of the conversation know about the other and depend to some extent on the other system (see the sketch after this list). There are places where this pattern must be used, but take care to apply it sparingly!
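Here's a minimal sketch of that RPC-style request / reply shape (Python again, with the same toy `publish` helper and invented names throughout). Notice that each side must know the other's topics and payloads -- that's the coupling:

```python
# Toy request / reply over an event bus. Both sides must agree on topic
# names and payload shapes, so the domains are coupled to one another.

def publish(topic: str, payload: dict) -> None:
    print(f"-> {topic}: {payload}")

def handle_quote_request(payload: dict) -> None:
    # The consumer knows the requester is listening on a reply topic.
    publish(payload["reply_to"], {
        "request_id": payload["request_id"],
        "premium": 1_250.00,  # invented business logic
    })

# The requester publishes a request AND watches for the response, so it
# depends on the pricing domain being present and well-behaved.
handle_quote_request({
    "request_id": "req-001",
    "reply_to": "quotes.responses",
    "policy_type": "auto",
})
```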

If you see highly-coupled events -- especially RPC-style events -- consider why that coupling exists, and whether it's appropriate. Typically, anything you can do to ease that coupling will allow domains to move more freely with respect to one another -- one of the major reasons you're building microservices in the first place.


A little more on events: State vs Events

“Hello world” is always beautiful

Every programming language you've ever learned began with "Hello world".

"Hello World" is typically a couple lines long, and it's always simple and easy to understand. You probably moved on to something like a to-do list after that -- also enticingly simple and easy.

Now, stop for a moment and consider the last bit of production code you touched. Not so beautiful, right?

There are a couple of takeaways from this juxtaposition. First, any language / framework looks great in that introductory scope. Not only is a simple use case a godsend for showing how elegant a tool is, but any tool's authors will also pick a domain / scenario that suits it when they create that demo app.

The second takeaway is a little more reflective. Remember the joy of learning that new language, the freedom you felt looking at the untarnished simple code, and the optimism you felt when you started to extend it. Now, flip back to that production code again. What can you do to get some of that simplicity and usability back to your production code? Certainly, it'll never look like "hello, world", but I think it's worth a moment's reflection every now and again to see if you can get just a little closer.

Ten years of Nvidia

I haven't been writing actively for a couple of years, and now that I have a chance to catch my breath, I'm working on remedying that absence. I just took a quick scan through drafts I'd started, and I tripped over one from 2013 (yes, ten years ago). The draft was really an email I'd sent, pasted into the editor to be picked up later. At the time, it may have been somewhat interesting, but in hindsight, I think it's more impactful. Here's the email -- the original context was me trying to convince my ex-wife that a game development class my son was interested in wasn't a waste of time:

I sort of hinted at some reasons I thought the "intro to gaming" class [my son] was looking at next semester might be more useful than it sounds, but I probably wasn't very clear about it.

Here's an announcement from this morning from a graphics card mfg:

http://www.anandtech.com/show/7521/nvidia-launches-tesla-k40

http://www.engadget.com/2013/11/18/nvidia-unveils-tesla-k40-and-ibm-deal

http://www.anandtech.com/show/7522/ibm-and-nvidia-announce-data-analytics-supercomputer-partnership

If you actually look at the picture of the card, you'll notice there's nowhere to plug in a monitor, because this graphics card isn't really a graphics card in the same sense that we think of them.

As these cards have become crazy-specialized & powerful in the context of their original purpose, their insane number-crunching capabilities haven't been lost on people beyond game developers.  In the same way that you see these cards supporting physics engines in games, they're now becoming more and more commonly used in scientific and engineering applications, because this same type of computing can be used for simulations and other scientific uses.

As I mentioned to [my son] when he was working in Matlab, the type of programming needed for this sort of processing is very different from the traditionally-linear do-one-thing-then-do-the-next-thing style of programming used for "normal" CPUs -- by its nature, it has to be asynchronous and massively parallel in order to take advantage of the type of computing resources offered by graphics-type processors.

Cutting to the chase, even though "game programming" probably sounds like a recreational activity, I think there's a decent chance that some of the skills touched on in the class might translate reasonably well into engineering applications -- even if he doesn't necessarily see that during the class.

Damn. Right on the money. Ten years on, Nvidia rules AI. Why? It's the chips and the programming model. All the reasons I cited to give game programming a chance have come to pass, and for what it's worth, the kid wound up using Nvidia GPUs to build neural networks in grad school.
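To make the programming-model point concrete -- this sketch is mine, not from the email, and it uses NumPy on a CPU as a stand-in for a GPU array library -- compare the traditionally-linear style with the data-parallel style these cards reward:

```python
import numpy as np

xs = np.arange(100_000, dtype=np.float64)

# Traditionally-linear style: do one thing, then do the next thing.
total = 0.0
for x in xs:
    total += x * x

# Data-parallel style: express the whole computation at once and let the
# runtime spread it across many execution units (thousands, on a GPU).
total_parallel = float(np.sum(xs * xs))

assert np.isclose(total, total_parallel)
```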

Game programming, indeed.

To SaaS or not to SaaS

I just saw an announcement from 37signals about a “new” initiative to sell old-fashioned purchased, installed-on-premise software. Is this a legitimate reversal of the seemingly inevitable march to SaaS, or merely another option for customers?

My first reaction: this is a breath of fresh air. Personally, I’ve been miffed seeing software I’d purchased when I wanted and upgraded when I wanted move to a subscription model. It messed with my sense of self-determination. But when I took five minutes to work out the math, the subscription model worked out to just about what I’d been spending on purchases and upgrades. As an individual consumer, that stigma was all in my head.

But 37signals sells enterprise software, and for enterprise software, there are some pretty interesting implications if the pendulum starts to swing back. I’m interested to see how some of these turn out.

One of the largest implications of the SaaS model for companies is the way these purchases impact accounting. While purchased software is typically treated as a capital investment that’s amortized, with expenses incurred over the life of the software, SaaS spend typically shows up directly as an operating expense. I’m curious whether this has factored into the shift back to purchase-once software.
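As a toy illustration with invented numbers, a $300k purchase amortized over five years and a $60k/year subscription can represent identical total spend with very different income-statement shapes:

```python
# Invented numbers: identical five-year spend, very different statements.
purchase_price = 300_000        # paid up front, capitalized (CapEx)
useful_life_years = 5
subscription_per_year = 60_000  # paid and expensed as incurred (OpEx)

for year in range(1, useful_life_years + 1):
    capex_cash = purchase_price if year == 1 else 0
    amortization = purchase_price / useful_life_years
    print(f"Year {year}: purchase -> cash ${capex_cash:,}, "
          f"expense ${amortization:,.0f} (amortization); "
          f"SaaS -> cash ${subscription_per_year:,}, "
          f"expense ${subscription_per_year:,}")
```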

Another defining characteristic of SaaS is its continuous delivery model. Agile methodologies in general favor frequent deployment, and a true CI/CD model can see this happen very frequently — much more frequently than any enterprise would want to install on-prem software. So, an on-prem delivery model implies going back to distributing releases and hot-fixes that will be installed by customers. Anyone who’s ever supported that model will tell you it’s no piece of cake.

Finally, the announcement from 37signals triggered a mental model that I believe exists in a lot of business decision-makers. The model is rooted in that same CapEx accounting model, and suggests that software can be bought, installed, and left to run as you would treat a filing cabinet. While this model may be appropriate for some applications that change very infrequently, I think this idea can prove harmful when applied to software that needs to grow and change with a business. There’s a nuanced view of the care and feeding and evolution of an application that can be lost in the filing cabinet model, and I’m frankly nervous to see that model perpetuated.

I’ll be watching this development from 37signals as it’s rolled out. They’ve always been thought leaders in the industry, and I’m curious to see if this is the beginning of a pendulum swing back toward a self-hosted model.

Team Topologies and Conway’s Law

Most of the reading I've done lately has been combinatorial in nature; that is, each book I pick up seems to reference another book I feel compelled to add to my reading list. My latest, Team Topologies, has been no different. This book has a pretty dry title but a really compelling pitch: it aims to help companies finally sort their org charts.

The book is structured with a "choose your own adventure" feel, with an introductory section and a series of deeper dives that can apply more or less to different scenarios. In this way, a reader can skip straight to a section that's likely to apply to a problem they're experiencing. In working through the first couple of chapters, though, I was already experiencing "aha" moments.

A central source of inspiration for the ideas in Team Topologies is Conway's Law, which features prominently in the foundational chapters. Conway's Law suggests that organizations build systems that mimic their communication channels, so authors Matthew Skelton and Manuel Pais believe the human factors of organizational design and the technical factors of systems design are inexorably intertwined. In digesting this book, I found myself reflecting on scenarios that bear out this premise.

If there's a vulnerability in this book at all, it's that intertwining of organizational and technical design. Strictly speaking, it's not a problem with the book at all; rather, it's an inexorable implication of Conway's Law. To the extent you buy into that relationship, it means that no software organization problem is merely technical, and no organizational problem can be fixed by sliding names around on an org chart. Ostensibly, this may make change seem harder. In reality, I believe it points out that no change you may have believed easy was ever really easy at all.

Author Allan Kelly has done some writing on Conway's Law, as well, and I believe his assertion here carries deep implications:

More than ever I believe that someone who claims to be an Architect needs both technical and social skills, they need to understand people and work within the social framework. They also need a remit that is broader than pure technology -- they need to have a say in organizational structures and personnel issues, i.e. they need to be a manager too.

https://www.allankellyassociates.co.uk/archives/1169/return-to-conways-law/

Perhaps this connection of technology and organization helps explain the tremendous challenges found in digital transformation. I'm reminded of a conference speech I attended several years ago. The speaker was a CTO / founder of a red-hot technical startup, and he was sharing his secrets to organizational agility. "Start from scratch and do only agile things," to paraphrase, but only slightly. Looking around the room at the senior leaders from very large corporations in attendance, I saw almost exclusively despair. While a neat summation of this leader's agile journey, that approach does nothing to help an established enterprise effect change.

Does Team Topologies help this problem? Indirectly, I believe it can. Overwhelmingly, just highlighting the deep connection between people, software, and flow is a huge insight. Lots of other books and presentations weigh in on flow, but the background and earlier research documented by Skelton and Pais in the early chapters of the book helped create a deeper sense of "why" for me. The lightbulb moment may be trite, but it's appropriate for me in this case. I found the book really eye-opening, and it's earned a spot on my short list of books for any business or technical leader.

Design resources for developers

Keeping abreast of technology updates has always been a formidable job.  As always, there are all sorts of options for consuming the fire hose of news.  For me, RSS feeds (I use Feedspot for my reader now) from some trusted sources remains a favorite, and I'd like to share a couple of gems that have been really fantastic for a number of years now.

Both of these are design-oriented sites, focusing on UI technologies, techniques, and reviews as well as design theory, layout reviews, and the like.  Given my typical focus on back-end design and architecture, these sites are a great way to bolster my front-end perspective and give me some tools to jump-start UI work.  I've been following both of these sites long enough that I honestly can't remember where I found them, but I'm really impressed with the track record for both.

Hongkiat.com (odd name, I know, but it's good stuff) is a real grab-bag of articles, so be prepared to filter out some topics you may not be interested in.  If you sift through to topics that are of interest, though, there are some really good resources.  In most cases, these articles are annotated link collections, so don't expect to find source code here.  Do expect to see examples of UI tools and technologies put to use with links to source sites where you can learn more.  A quick flip through recent articles shows topics like these:

Smashing Magazine tends to cover fewer topics more deeply, with more of a focus on their own content, vs curation of content from other sources.  This is a great source for tutorials and backgrounders, and they've published a number of books based largely on rolled-up content from the site.  Again, not everything will be of interest, but the quality of the content that's here is really high.  Here's a quick sampling of some recent articles from Smashing Magazine:

If you're looking for a couple good streams of design inspiration, give these two a look, and if you've got any other favorites, let me know!

Agile Enterprise, Part 1

I’ve recently had occasion to see the same challenge pop up in a couple different settings -- namely, bridging the gap between agile development and integration into an enterprise.  In both of the cases I saw, development employing agile practices did a good job of producing software, but the actual release into the enterprise wasn’t addressed in sufficient detail, and problems arose -- in one case, resulting in wadding up a really large-scale project after an enormous investment in time, energy, and development.

Although the circumstances, scope, and impact of problems like this vary from one project to the next, all the difficulties I’ve seen hinge on one or more of these cross-cutting areas: roadmap, architecture, or testing / TDD / CI.

Of the three, roadmap issues are by far the most difficult and the most dangerous.  Creating and understanding the roadmap for your enterprise requires absolute synchronization and understanding across business and IT areas of your company, and weaknesses in either area will be exposed.  Simply stated, software can’t fix a broken business, and business will struggle without high-quality software, but business and software working well together can power growth and profit for both.

Roadmap issues, in a nutshell, address the challenge of breaking waterfall delivery of enterprise software into releases small enough to reduce the risk seen in a traditional delivery model.  The web is littered with stories of waterfall projects that run years over their schedule, finally failing without delivering any benefit at all. Agile practice attempts to address this risk with the concepts of Minimum Viable Product (MVP) and Minimum Marketable Feature (MMF), but both are frequently misunderstood, and neither, by themselves, completely address the roadmap threat.

MVP, taken at its definition, is really quite prototype-like.  It’s a platform for learning about additional features and requirements by deploying the leanest possible version of a product.  This tends to be easier to employ in green-field applications -- especially ones with external customers -- because product owners can be more willing to carve out a lean product and “throw it out there” to see what sticks.  Trying to employ this in an enterprise or with a replace / rewrite / upgrade scenario is destined to fail.

Addressing the same problem from a slightly different vector, MMF attempts to define a more robust form of “good enough”, but at a feature-by-feature level.  In this case, MMF describes functionality that would actually work in production to do the job needed for that feature -- a concept much more viable for those typical enterprise scenarios where partial functionality just isn’t enough.  Unfortunately, MMF breaks down when you compound all the functionality for each feature by all the features found in a large-scale enterprise system.  Doing so vaults you right back into huge waterfall delivery mode, with all its inherent pitfalls.

In backlog grooming and estimation, teams look for cards that are too big to fit into a sprint -- these cards have to be decomposed before they can fit into an agile process.  In the same way, breaking huge projects down by MVP or MMF also must occur with a consideration for how release and adoption will occur, and releasing software that nobody uses doesn’t count!

Architects and developers recognize this sticking point, because it’s the same spot we get into with really large cards.  When we decompose cards, we look for places to split acceptance criteria, which won’t always work for enterprise delivery, but with the help of architecture, it may be possible to create modules that can be delivered independently.  Watch for that topic coming soon.

Armed with all the techniques we can bring to bear to decompose large-scale enterprise software, breaking huge deliveries into an enterprise roadmap will let you and your organization see software delivery as a journey more than as a gigantic event.  It’s this part that’s absolutely critical to have embraced by both business and IT.  The same partnership and trust you’re building with your Product Owners at scrum-team scale has to extend to your entire enterprise in order for this to work.  No pressure, right?  Make no mistake -- enterprise roadmap planning must be visible and embraced at the C-level of your enterprise in order to succeed.

Buy-in secured, a successful roadmap will exhibit a couple key traits.  First, the roadmap must support and describe incremental release and adoption of software.  Your architecture may need attention in order to carve out semi-independent modules that can work with one another in a loosely-coupled workflow, and you absolutely need to be able to sequence delivery of these modules in a way that lets software you’ve delivered yield real value to your customers.  The second trait found in a roadmap of any size at all is that it’s nearsighted: the stuff closer to delivery will be much more clearly in-focus than the stuff further out. If you manage your roadmap in an agile spirit, you’ll find that your enterprise roadmap will also change slightly over time -- this should be expected, and it’s a reflection of your enterprise taking on the characteristics of agile development.

Next up, I’ll explore some ways architecture can help break that roadmap into deliverable modules.

 


Adding insult to injury

Software developers are a clever lot, and prone to bouts of creativity every once in a while.  It turns out these are essential traits when building software, but cleverness must be tempered when it impacts the user-facing parts of your software.

(Bugzilla's "no results found" message -- please don't do this)

Case in point:  Bugzilla's search results page.  This is what happens when you try searching for something in Bugzilla and it doesn't find any results.  It's supposed to be funny -- the misspelling is, in fact, intentional.

But it's not funny.  It's really, really not funny.  It's not funny for two very specific and very important reasons.

Reason 1: Usage context.  If any Bugzilla user ever sees this message, it's because he failed at the task he was trying to accomplish.  Since the message exists solely to explain to the user that he failed, it's pretty reasonable to assume that the user might not be in the best of moods already.  I know I wasn't.

Reason 2: Product context.  If you've not already had the pleasure of using Bugzilla, let me fill you in on its search capabilities: they suck.  Like, searching in Bugzilla is not only unpleasant, it's also unfruitful way too often.  It's the best reminder you're likely to see about why Google won the search wars -- it's because everything else used to work like Bugzilla.  So when your (otherwise excellent) product has a critical flaw, such as searching in Bugzilla, it might be best to not choose that specific part of your product to try to crack a funny.  Just sayin'.

The lesson in all this?  Somewhere on any product team, there needs to be a voice of reason who's looking at context stuff like this and deciding when it's time to be funny, and when it's not.

What is Craftsmanship?

I was asked recently to define "craftsmanship" in software development, and I thought this would make a great topic for discussion.  You can obviously find a dictionary definition for craftsmanship, but much like with architecture, I think that when we attach the additional context of software development, craftsmanship introduces some important ideas about how individual contributors fit into a larger software development organization.

Software as Art

Ladle (Photo credit: lambertpix)

The concept of craftsmanship in general emphasizes the skills of individuals, and connotes high-quality, highly customized work.  It is very much the antithesis of mass production.

This general understanding, when applied to software development, implies a very individualized experience -- indeed, if you think about buying a work of art or even something like a piece of furniture, "craftsmanship" would imply that the product you're buying is one-of-a-kind.  Nobody, literally, has another item quite like the one you're buying.  For businesses commissioning custom software, there's a nugget of truth here, of course -- one of the reasons to write custom software is the presumption that your business requirements are different in some way from everybody else's requirements, and your business demands the flexibility that comes with custom software.

Another key aspect of craftsmanship is the deep and broad skill set of the craftsman.  Just as you'd expect a hand-crafted product to be almost entirely the result of one man's work (to the extent that signing the work wouldn't be unheard-of), so software craftsmanship emphasizes individual skill and accountability.  And while craftsmanship doesn't necessarily imply apprenticeship, a craftsman is certainly expected to be highly experienced and able to draw on years of work in the field in order to produce the sort of high-quality work that's expected.

Business Implications

While high-quality, highly-customized software sounds very appealing from a business perspective, the deliberate pace and unyielding attention to detail we most often associate with craftsmanship is petrifying.  Business software needs to move at the pace of business, after all, doesn't it?  I really believe that the implied slowness of craftsmanship acts as an impenetrable stigma against its adoption, which I think is largely unfortunate.

One of the key benefits of the higher-quality code produced by accomplished craftsmen is that less time is spent on an ongoing basis recovering from the sins of shoddy coding as these systems grow and evolve over time.  High-quality code not only works better, it's more maintainable, so that initial effort can pay real dividends over the life of a long-lived system.  Where we sometimes talk about sloppy code as being high in technical debt, quality code can be seen as an asset for an organization.

Right now, software craftsmanship remains on the fringe of development practices, and thus tends to be favored in smaller shops, as well as in "green-field" projects much more so than in larger, more established environments.  It remains to be seen whether it eventually transitions into mainstream use the way that Agile has, but I think a lot depends on the effective communication of the real benefits of craftsmanship, as well as on mature tactics to introduce and manage craftsmanship in the organization.

My Take

Craftsmanship is about individual contribution.  It's about taking pride in your work, and it's about a relentless journey of improvement.  The scope of these experiences is best-suited to small groups of developers who can help one another grow and hold each individual accountable to the group.  This, all by itself, is a tall order.  It's not easy for anyone to expose himself to criticism, and that's a huge prerequisite for this sort of culture to form.  This sort of open sharing takes huge amounts of trust, and trust takes time to build.


Another critical motivator for a "craftsman-like" organization is the sense that the team is building something that's going to last.  When you think of hard goods built by craftsmen, you equate the higher quality to a longer lifespan -- as a consumer, you may be willing to pay a premium for a product that will outlast a cheaper product because you know it'll last a long, long time.  In software, this understanding of quality has to be visible to the team as well as the customer, and the team needs to believe that this higher quality is valued.  Few things are as demotivating for a developer as seeing a great body of code mangled by someone who doesn't share the same capabilities or desire for quality.

It might seem at this point that I'm advocating an all-or-nothing strategy for software development, or that software craftsmen can exist only in small shops, but I don't believe either of these is necessarily true.  I believe that if craftsmanship is going to exist in a large organization, however, the organization needs to understand where it's going to be applied and it must organize its teams to support craftsmanship in targeted areas.  I think this prerequisite is well-suited to a component-based Enterprise Architecture, in which well-defined components or services are identified, invested in, maintained with care, and protected, and yes -- this demands an enlightened and skilled application of Enterprise Architecture.

Craftsmanship isn't a magic wand, or a buzzword, or a slogan.  Craftsmanship takes hard work, and it demands commitment from organizations and developers.  It's not right for every organization, and it certainly isn't right for all developers, but if you need software that stands the test of time, craftsmanship will pay dividends for the lifetime of that software.