UX Lessons from Visual Studio

Commonly, when people think "user experience", they think about screen designs. Apps, web pages, Figma -- all that stuff. The shift from UI design to UX alone is a nod to this practice being more than skin-deep, but I still think a lot of the deeper behavioral aspects of user satisfaction are lost on many -- especially people who have only a casual interest in UI/UX.

I believe there are clues all around us, though, that we can take advantage of if we pay attention. I'll begin with the premise (or reminder) that real usability is invisible: you notice it only through the absence of impediments. In short, a system is usable when it does exactly what a user expects, even if they can't articulate those expectations.

One of the best examples of invisible usability can be found in Visual Studio, which has been a market-leading IDE for decades now. Built by developers for developers, it gets a lot of things right, and even though you're probably not building a tool for developers, some of these lessons likely apply to you, too.

Starting Fresh

Right off the bat, Visual Studio shows its usability by setting users up to be successful when working with a new project. Pick any template you like in Visual Studio, create a new project from it, and it will build successfully. While it might not seem like a big deal, it makes a lot of difference for a new user learning one of these technologies. The ability to start from a solid foundation gives that user confidence to build on that new solution.

What's the lesson? Be aware of the "getting started" experience for your users. Anything you can do to get a user going more quickly gives them confidence in your software. When users start with confidence and gain momentum, they're more likely to persevere if they get stuck later, too.

Save Always

Another subtle way Visual Studio helps is one you've probably never given a second thought. No matter what state your working files are in, you'll never mess one up so badly that Visual Studio won't let you save your work.

A more elegant take on this is to save automatically, so if something goes really wrong, you have a recovered version of your work (which Visual Studio obviously does). Note that automatically saving can actually introduce some small usability hiccups of its own, but that's a problem for another day.

What's the lesson? Document-based applications like Visual Studio and office apps (spreadsheets, docs, and the like) consider this sort of behavior table stakes these days, and you're probably not building one of those, but watch out for behaviors that prevent a user from saving work-in-progress. In a UI, you may run into this if you've got a bunch of required fields on a page; as the field count goes up (and maybe the page count, too), expect your users' frustration to go up with it.

It's actually a lot more likely that you'll run into this problem in your APIs. For example, here's a simple method I used in a post about fluent validation:

        public ActionResult CreateDealer(Dealer dealer)
        {
            Guard.Against.AgainstExpression<int>(id => id <= 0, dealer.Id, "Do not supply ID for new entity."); 

            var dvalidator = new DealerValidator();
            var result = dvalidator.Validate(dealer);
            if (result.IsValid)
            {
                _sampleData.Add(dealer);  // SaveChangesAsync() when working with an actual data store
                return CreatedAtAction(nameof(CreateDealer), new { id = dealer.Id }, dealer);
            }
            else
            {
                return BadRequest(result);
            }
        }

While this method effectively validates inputs, the results returned to the consumer aren't especially helpful, and as an object like this grows, it becomes more likely that a consumer will have partial inputs that don't pass every validation rule. Rather than preventing a save altogether, consider structuring an object like this so that as many values as possible are optional for saving -- even if they're needed for the object to be "valid".

In this simple example, we can create a result type that wraps the model and adds a ValidationResult property. We'll return this instead of the "naked" object so the validation results appear alongside the model. The result object is preferable to embedding a ValidationResult in the model itself because it keeps the request shape intact -- there's no reason for the ValidationResult to be part of the CRUD requests.

    public class ModelValidationResult<M, V> where V : AbstractValidator<M>, new()
    {
        public ModelValidationResult(M model)
        {
            this.Model = model; 
        }
        public M Model { get; private set; }
        public FluentValidation.Results.ValidationResult ValidationResult { get; set; }
        public bool Validate()
        {
            V validator = new V();
            this.ValidationResult = validator.Validate(this.Model);
            return ValidationResult.IsValid;
        }
    }

With Validate on the ModelValidationResult, the controller method also becomes a bit simpler:

        public ActionResult CreateDealer(Dealer dealer)
        {
            Guard.Against.AgainstExpression<int>(id => id <= 0, dealer.Id, "Do not supply ID for new entity.");

            ModelValidationResult<Dealer, DealerValidator> res = new ModelValidationResult<Dealer, DealerValidator>(dealer);
            res.Validate();
            _sampleData.Add(dealer);  // SaveChangesAsync() when working with an actual data store
            return CreatedAtAction(nameof(CreateDealer), new { id = dealer.Id }, res);
        }

Note that I left the guard clause in place here -- there will still be some validations that really need to prevent saving bad data, but now that these changes are in place, it's possible to save an object with incomplete data.

This technique won't work in all cases, but consider it if you can support saving objects with a minimal set of data and check for a fully-populated object later.
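
As a hedged sketch of what that later check might look like, a hypothetical "finalize" endpoint (not part of the original sample) could re-run the full validation before promoting the saved record:

        // Hypothetical finalize step -- the names here are illustrative, not from the sample repo.
        // The dealer was saved earlier with partial data; full validation gates the promotion to "active".
        public ActionResult FinalizeDealer(int id)
        {
            var dealer = _sampleData.Single(d => d.Id == id);

            var res = new ModelValidationResult<Dealer, DealerValidator>(dealer);
            if (!res.Validate())
            {
                // Still incomplete -- report what's missing, but the saved work stays intact.
                return UnprocessableEntity(res);
            }

            // Mark the dealer active here (and SaveChangesAsync() when working with an actual data store).
            return Ok(res);
        }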

And more!

If you view Visual Studio with an eye to borrowing UX techniques, you'll see more lessons like these -- the Dynamic Validation example I covered earlier is one of them. Since you're probably not building an IDE or even a tool for developers, you'll need to interpret some of the techniques liberally, but I assure you the lessons are there. You may also spot some negative usability examples -- in fact, these are sometimes easier to see because they get in our way and draw our attention.

If you're interested in source code for this example, you can find it on GitHub: https://github.com/dlambert-personal/CarDealer/tree/Guard-vs-validate

Expected and Actual

I'm a fan of simple hooks to summarize complex ideas, and one of my favorite go-to's is simple, memorable and versatile:

Expected == Actual

Not much to look at by itself, but I've gotten a lot of mileage out of this in some useful contexts. You probably recognize one of the most obvious applications immediately: testing / QA. We're well-trained in applying expected == actual in testing, and most bug-reporting contexts will define a bug as a scenario in which actual behavior doesn't match expected behavior.

Even in this simplest application, all the value is in how we exercise the framework, and I promise that if you really work this equation, you'll have a better handle on bugs immediately. Of the two sides of the equation, "expected" usually seems easier, but it's almost always more difficult because most of our expectations are unspoken. If I had a dime for every bug report I've seen where "actual" was well-documented but "expected" existed only in the mind of the reporter, I'd be writing this from a beach in Hawaii.

Of course, that's where the exercise begins to get interesting. "What were you expecting?" is a great place to start. If you can trace expectations to real, documented requirements, it's a short conversation (perhaps followed by another conversation about why that scenario was missed in automated testing). It's not unheard of, though, for these bugs to occur in scenarios that weren't explicitly called out in requirements. If that's the case, be sure to examine "expected" to see whether it's reasonable for most users to expect the same thing under those conditions -- which is a great segue to my next favorite application of expected == actual.

I'm of the opinion that expected == actual is also a pretty good foundation for understanding usability. I touched on this recently in a post about API usability, and it holds true generally, I believe. In that post, I referenced Steve Krug's Don't Make Me Think, which is both an excellent book and another great oversimplification of UX. Both of these are built on the premise that whenever an application does what a user expects it to do, they don't have to think about it, and they consider it usable.

Mountains have been written about how to accomplish this, of course, and I'm not going to improve on the books that cover those techniques, but as I pointed out with respect to API usability, consistent behavior goes a long way. Even devices and applications we consider highly usable have a short learning curve, but once a user is invested in that curve, it's tremendously helpful to leverage that learning consistently -- no surprises!

A final consideration applies in both of these areas: when you find a case where "expected" isn't well-defined yet and can't be easily derived from other use cases (consistency), that's a golden opportunity. Rather than just taking the reporter's version of "expected" at face value, ask why they expected that (when possible). You probably won't be able to invest in a full "nine-whys" exercise, but keep the idea in mind. The more you understand about how those expectations formed, the more closely you can emulate that thinking as you project other requirements.

Besides these examples, where else might you be able to apply "expected vs. actual"?

APIs have usability, too!

Usability has a long history in software. In fact, as I sat down to pen these words, I googled "history of usability ux" and turned up some scholarly articles going back over 100 years. Too far. In software, you can't go wrong starting with Apple, which puts the origin of UX in the mid 90's. Better.

But for much of this time, we've tuned in to software usability as experienced through our user interfaces by end-users. Today, there's more to usability than user interface design, and I'd like to broaden the discussion a bit.

Usability? What usability?

When we think about great user experiences, we typically don't recognize them as great until we start comparing them to lesser experiences. I believe this is a big clue about how we can apply UX more universally. A lot of usability boils down to doing what the user expects you to do... so when you do it, most users will never even notice.

Think about it - when's the last time you gave a passing thought to usability for an application that was already behaving the way you wanted? There are scores of great books on how to achieve usability (I love Steve Krug's Don't Make Me Think, and Don Norman's The Design Of Everyday Things), but I really believe if you can manage to do what the user is expecting you to do, you're typically in pretty good shape.

The changing landscape of applications

Next, let's look at changes in applications and application design. You can probably see where this is going. Where once the only users of our applications were end-users operating through a Windows or web interface, cloud-native applications built on microservice architectures rely heavily on APIs to orchestrate, integrate, and extend functionality. In many cases, APIs are the interfaces for services, developers are the users, and the user experience (UX) is found in the ease of use of those APIs.

And what is it about an API we'd consider more usable? As with the generalized case above, I think the ultimate yardstick is whether the API behaves the way a developer would expect. I believe the popularity of RESTful APIs, for instance, isn't just because JSON is easy to work with -- it's because a well-written RESTful API is discoverable and predictable.

Note that discoverable isn't the same as documented -- even correctly documented, which never happens. Discoverability starts with tools like Swagger that expose live documentation of API methods and objects, but it connects with predictability in an important way: as developers engage with your API and discover how parts of it work, consistent behavior and naming create predictability. When these two factors are combined, they reinforce one another and create an upward spiral for developers in which learning is rewarded and fuels future productivity. And yes -- this exact relationship is part of understanding usability in a visual / UX context, as well.
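
To make the "discoverable" half concrete, here's a minimal sketch assuming an ASP.NET Core project with the Swashbuckle.AspNetCore package referenced; the generated Swagger UI becomes the live documentation developers can explore:

    // Program.cs -- minimal sketch; assumes the Swashbuckle.AspNetCore NuGet package is installed.
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddControllers();
    builder.Services.AddEndpointsApiExplorer();  // expose endpoint metadata to the generator
    builder.Services.AddSwaggerGen();            // build the OpenAPI document from the controllers

    var app = builder.Build();
    app.UseSwagger();     // serves /swagger/v1/swagger.json
    app.UseSwaggerUI();   // serves the interactive explorer at /swagger
    app.MapControllers();
    app.Run();

Predictability is the other half, and it doesn't come from a package -- it comes from consistent naming, consistent status codes, and consistent resource shapes across the whole API.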

Watch for more posts on API conventions and style soon, and watch for the ways these ideas support one another and ultimately contribute to Krug's tagline: Don't make me think!

State vs. Events

I had an interesting experience during an application migration a while back. The application used a NoSQL data store and an event streaming platform, and an unforeseen hiccup meant the event stream couldn't make the migration smoothly -- we'd wind up starting the "after" version of the application with nothing at all in the event stream, though we'd have all the NoSQL data from the prior application.

My initial reaction was panic. There was no way this could stand. But in examining the problem, not only was it going to be genuinely difficult to port the live event stream, it turned out our application didn't really care too much as long as we throttled down traffic before migrating state. That was the moment I came to really understand state vs. events, and how an event streaming application (or service) is fundamentally different from a traditional state-based application.

Retrain your brain

We've been trained to build applications with a focus on state and how we store it. This can make it hard to transition to an architecture of services where things are in motion -- events. There will always be circumstances in which at-rest state is appropriate and necessary -- reporting, for instance. But when analyzing software requirements, look for places where state-bias causes you to think about dynamic systems as if you're looking at static snapshots.

A shopping cart / order scenario is typical of this sort of state flattening. Consider a simple order with three line items. If we ignore how this order came to be formed, much information is potentially lost: it's possible this customer considered other items before settling on the ones in this cart, and by looking only at the state at the end of checkout, that history is gone.

Compare that to an event stream that yields the same final shopping cart: in the event view, we can also see the additional / alternative items that were added and then abandoned. This is more aligned with what you'd expect from an event stream like Kafka vs. a state store (SQL or NoSQL).

In this example, it's likely we'd want to understand why items (a) and (b) were removed - perhaps because of availability or undesirable shipping terms. We'd also want to understand whether the new items (2) and (3) were added as replacements or alternatives to (a) and (b). Regardless, the absence of the events in the first place prevents any such awareness, let alone analysis.
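
As a minimal sketch of the difference (the types here are invented for illustration, not taken from a real design), the events can always be folded down to the final cart, but the flattened cart can never recover the removals:

    // Illustrative only -- event and projection types invented for this sketch.
    public record CartEvent(string Type, string Sku);   // Type: "Added" or "Removed"

    public static class CartProjection
    {
        public static HashSet<string> Replay(IEnumerable<CartEvent> events)
        {
            var cart = new HashSet<string>();
            foreach (var e in events)
            {
                if (e.Type == "Added") cart.Add(e.Sku);
                else if (e.Type == "Removed") cart.Remove(e.Sku);
            }
            return cart;   // the "state" view is just a projection of the events
        }
    }

Replaying Added(1), Added(a), Added(b), Removed(a), Removed(b), Added(2), Added(3) yields the same three-line-item cart as the state store, but only the event view knows that (a) and (b) were ever considered.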

As a developer, you might relate to an example that hits closer to home. Recall the largest source code base you've worked on -- ideally one with several years of history. Now zip it up -- minus the .git folder -- and remove the hosted repository. That static source snapshot certainly has value, but consider the information you've lost. You can no longer see who contributed which changes, or when. Assuming you'd tied commits to work items, you've lost that traceability, too. The zip file is a perfect representation of state, but without the events that created that state, you're left in a very diminished position.

Hybrid approaches

Of course, many systems combine state and events, and the focus on one or the other can be visualized on a continuum. On the left, storing only events, you'd have something that looks like a pure event-sourcing design [1]. If and when you need to see current state, the only way to get it is to replay all the events in the stream for the period that interests you. On the right, of course, you have only current state, with no record of how that state came to be.

In practice, most services and almost all applications fall somewhere between these extremes. Even a pure event-sourcing design will store state in projections for use in queries and/or snapshots to mark waypoints in state. More likely still is a mix of services, with the relative emphasis differing from service to service.
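
Reusing the illustrative cart types above, a hedged sketch of the hybrid read path: start from a stored snapshot and apply only the events recorded since it was taken, rather than replaying the whole stream.

    // Sketch only -- builds on the illustrative CartEvent type above.
    public static HashSet<string> CurrentCart(HashSet<string> snapshot,
                                              IEnumerable<CartEvent> eventsSinceSnapshot)
    {
        var cart = new HashSet<string>(snapshot);    // waypoint state, persisted periodically
        foreach (var e in eventsSinceSnapshot)       // only the tail of the stream
        {
            if (e.Type == "Added") cart.Add(e.Sku);
            else if (e.Type == "Removed") cart.Remove(e.Sku);
        }
        return cart;
    }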

There's no right answer here, and the best design for many applications will employ a hybrid approach like this to capture dynamic behavior where it's appropriate without losing the speed advantages typically seen in flattened-state data stores. When considering these alternatives, though, I'd encourage you to keep your state bias in mind and try not to lose event data when it may be important.


  1. For more on event sourcing, see Martin Fowler's excellent intro as well as the introduction on EventStore.com.

Dynamic Validation using FluentValidation

Validation in a C# class is an area where configuration / customization pressures can sneak into an application. Simple scenarios are typically easily-served with some built-in features, but as complexity builds, you may be tempted to start building some hard-coded if-then blocks that contribute to technical debt. In this article, I'll look at one technique to combat that hard-coding.

Simple Validation

Consider an application tracking cars on a dealer's lot. For this example, the car model is super-simple -- just make, model, and VIN:

    public class Vehicle
    {
        public string Make { get; set; }
        public string Model { get; set; }
        public string VIN { get; set; }
    }

Microsoft has a data annotation library that can handle very simple validations just by annotating properties, like this:

        [Required]
        public string Make { get; set; }

When used with the data annotations validation context, this is enough to tell us a model isn't considered valid:

            var v = new Vehicle.Vehicle();
            Assert.IsNotNull(v);
            ValidationContext context = new ValidationContext(v);
            List<ValidationResult> validationResults = new List<ValidationResult>();
            bool valid = Validator.TryValidateObject(v, context, validationResults, true);
            Assert.IsFalse(valid);

This is not, however, especially flexible. Data annotation-based validations become more difficult under circumstances like:

  • Multiple values / properties participate in a single rule.
  • Sometimes a validation is considered a problem, and sometimes it's not.
  • You want to customize the validation message (beyond simple resx-based resources).

Introducing FluentValidation

Switching this to use FluentValidation, we pick up a class to do validation now, and the syntax for executing the validation itself isn't much different so far.

    public class VehicleValidator : AbstractValidator<Vehicle>
    {
        public VehicleValidator()
        {
            RuleFor(vehicle => vehicle.Make).NotNull();
        }
    }
...
            // testing
            var v = new Vehicle.Vehicle();
            var validator = new VehicleValidator();
            var result = validator.Validate(v);

By itself, this change doesn't make much difference, but because we've switched to FluentValidation, a few new use cases become a bit easier. Be forewarned -- I'm using some contrived examples here that are super-simplified for clarity.

Contrived Example 1 - search model

Again, this example isn't what you'd use at scale, but let's say you'd like to use the Vehicle model in two different scenarios with different sets of validation rules. When adding a new vehicle to the lot, you'd want to be sure you're filling in the fields required to properly identify a real car -- Make, Model, VIN (and presumably a boatload of other fields, too). This looks similar to the example above where we required only Make. The same model could be used as the request in a search method, but validation would be very different: in that case, we don't expect all the fields to be present, but we might well want at least one of them to be.
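
The ListingVehicleValidator used in the tests below isn't shown in the original post; a minimal version (field list assumed) might look something like this:

    public class ListingVehicleValidator : AbstractValidator<Vehicle>
    {
        public ListingVehicleValidator()
        {
            // Adding a vehicle to the lot requires all of the identifying fields.
            RuleFor(vehicle => vehicle.Make).NotNull();
            RuleFor(vehicle => vehicle.Model).NotNull();
            RuleFor(vehicle => vehicle.VIN).NotNull();
        }
    }

The search scenario, on the other hand, needs the "at least one field" behavior: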

    public class SearchVehicleValidator : AbstractValidator<Vehicle>
    {
        public SearchVehicleValidator()
        {
            // Each rule fires only when the other two criteria are missing,
            // so supplying any one of the three fields satisfies the validator.
            RuleFor(v => v.Make).NotNull()
              .When(v => string.IsNullOrEmpty(v.Model) && string.IsNullOrEmpty(v.VIN))
              .WithMessage("At least one search criterion is required(1)");

            RuleFor(v => v.Model).NotNull()
              .When(v => string.IsNullOrEmpty(v.Make) && string.IsNullOrEmpty(v.VIN))
              .WithMessage("At least one search criterion is required(2)");

            RuleFor(v => v.VIN).NotNull()
              .When(v => string.IsNullOrEmpty(v.Make) && string.IsNullOrEmpty(v.Model))
              .WithMessage("At least one search criterion is required(3)");
        }
    }

With FluentValidation, this syntax isn't too bad with just three properties, but as your model grows, you'd want a more elegant solution (see the sketch after the tests below). In any event, we can now validate the same class with different validation rules, depending on how we intend to use it:

            var v = new Vehicle.Vehicle() { Make = "Yugo"};
            var svalidator = new SearchVehicleValidator();
            var result = svalidator.Validate(v);
            Assert.IsTrue(result.IsValid);  // one non-null field here is sufficient to be valid 

            var lvalidator = new ListingVehicleValidator();
            result = lvalidator.Validate(v);
            Assert.IsFalse(result.IsValid);  // all fields must now be valid
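
As noted above, the mirrored When conditions get unwieldy as the property count grows; one hedged alternative (my variation, not from the original post) is a single rule over the whole object:

    public class SearchVehicleValidator : AbstractValidator<Vehicle>
    {
        public SearchVehicleValidator()
        {
            // One rule, anchored to Make so the failure has a property name to report against.
            RuleFor(v => v.Make)
                .Must((vehicle, _) => !string.IsNullOrEmpty(vehicle.Make)
                                   || !string.IsNullOrEmpty(vehicle.Model)
                                   || !string.IsNullOrEmpty(vehicle.VIN))
                .WithMessage("At least one search criterion is required");
        }
    }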

Contrived Example 2 - dynamic severity

Buckle up -- we're going to shift into high gear on the contrived scale. For this example, let's say you're deploying this solution to several customers, and they have different rules about handling car purchases -- some expect to see a customer pre-qualified for credit before finalizing a purchase price, and some don't. Again, the example isn't as important as the technique, so please withhold judgment on the former. In addition to extending the Vehicle model into a VehicleQuote model:

    public class VehicleQuote: Vehicle
    {
        public decimal? PurchasePrice { get; set; }
        public bool? CreditApproved { get; set; }
    }

We're also going to validate slightly differently. Here, note the context we're passing in and the dynamic severity for the credit approved rule:

    public class QuoteVehicleValidator : AbstractValidator<VehicleQuote>
    {
        VehicleContext cx;
        public QuoteVehicleValidator(VehicleContext cx)
        {
            this.cx = cx;
            RuleFor(vehicle => vehicle.Make).NotNull();
            RuleFor(vehicle => vehicle.Model).NotNull();
            RuleFor(vehicle => vehicle.VIN).NotNull();
            // dynamic rules
            RuleFor(vehicle => vehicle.CreditApproved).Must(c => c == true).WithSeverity(cx.CreditApprovedSeverity);
        }
    }

This lets us pass in an indicator for whether we consider credit approved to be an error, warning, or even an info message. The tests evaluate "valid" differently here, as any message hit at all -- even one with an Info severity -- will evaluate IsValid as false.

        var vq = new Vehicle.VehicleQuote() { Make = "Yugo", Model = "Yugo", VIN = "some vin" };
        var cx = new VehicleContext() { CreditApprovedSeverity = FluentValidation.Severity.Error };
        var qvalidator = new QuoteVehicleValidator(cx);
        var result = qvalidator.Validate(vq);
        Assert.IsTrue(result.Errors.Any(e => e.Severity == FluentValidation.Severity.Error));  // non-approved is an error here.

        cx.CreditApprovedSeverity = FluentValidation.Severity.Info;
        qvalidator = new QuoteVehicleValidator(cx);
        result = qvalidator.Validate(vq);
        Assert.IsFalse(result.Errors.Any(e => e.Severity == FluentValidation.Severity.Error));  // with Info severity, the missing approval no longer registers as an error

Note that the same technique can be used for validation messages -- you could drive these from a ResX file or make them entirely dynamic if you need to create different experiences under different circumstances.
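
For instance, here's a hedged sketch of a context-driven message, assuming a hypothetical CreditApprovedMessage property on VehicleContext (FluentValidation's WithMessage accepts a delegate as well as a string):

            // Sketch only -- CreditApprovedMessage is an assumed property, not part of the sample context.
            RuleFor(vehicle => vehicle.CreditApproved)
                .Must(c => c == true)
                .WithSeverity(cx.CreditApprovedSeverity)
                .WithMessage(vehicle => cx.CreditApprovedMessage);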

Wrapping it up

The examples here are clearly light in depth, but they should show some of the ways FluentValidation can create dynamic behavior in validation without incurring the technical debt of custom code or a labyrinth of if-thens. These techniques are simple and easy to test, as well.

You can find the code for this article at https://github.com/dlambert-personal/dynamic-validation

Customer Lifecycle Architecture

If you've gone through any "intro to cloud development" presentations, you've seen the argument for cloud architecture built around infrastructure scaling up and down. Maybe you run a coffee shop bracing for the onslaught of pumpkin-spice-everything, or maybe you're in a 9-5 office where activity dries up after hours. In any event, this scale-up / scale-down traffic shaping is well known to most technologists at this point.

But, there's another type of activity shaping I'd like to explore. In this case, we're not looking at the aggregate effect of thousands of PSL-crazed customers -- we're going to look at the lifecycle of a single customer.

Consider two types of customers, very different from one another. The first profile is what you'd expect for a customer with low acquisition cost and low setup activity -- an application like Meta might have a profile like this, especially when you expect users to keep consuming at a steady rate.

Compare this to a customer with high acquisition cost and high setup activity -- this could be a customer with a long sales lead time or one with a lot of setup work, configuration, training, or the like. Typically, these are customers with a higher per-unit revenue model to support this work. Examples of a profile like this could be sales of high-dollar durable goods, financial instruments like loans, or insurance policies, and so on.

So, What?

Why, exactly, would we care about a profile like this? I believe this sort of model facilitates some important conversations about how a company allocates software spend relative to customer activity, costs, and revenue, and I also think this profile can help us understand the architectural needs of the services supporting the lifecycle.

Note that this view of customer activity bears some resemblance to a discipline called Customer Lifecycle Management (CLM). Often associated with Customer Relationship Management (CRM), CLM is typically used to understand the activities associated with acquiring and maintaining a customer using five distinct steps: reach, acquisition, conversion, retention, and loyalty.

Rather than just working to understand what's happening with your customer across the time axis, consider some alternative views. We can map costs and revenue per customer, for example. Until recently, the cost to acquire a customer could be known only on an aggregate basis -- computed from total costs across all customers and allocated as an estimate. But as FinOps practices improve in fidelity and adoption, per-customer information like this could become a reality.

FinOps promises to give us the tools to track the investment in software you've purchased, SaaS tools, and, of course, the custom tools and workflows you've developed. Unless these practices are extremely mature in your organization, you'll likely need to allocate costs to compute these values. Also note that these costs are typically separated on an income statement: those occurring pre-sale tend to show up as Cost of Goods Sold (COGS), while those after the sale are operating expenses (OpEx). If you intend to allocate COGS per customer, it has to be spread across the customers you actually close, since the prospects you don't close generate no revenue at all.
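
As a back-of-the-napkin illustration (the figures and names here are entirely hypothetical), the allocation might look like this:

            // Hypothetical quarter: all pre-sale spend is absorbed by the customers who closed,
            // because the prospects who didn't close contribute no revenue to absorb it.
            decimal preSaleSpend = 250_000m;   // sales + marketing + pre-sale tooling
            int prospectsWorked = 1_000;
            int customersClosed = 100;

            decimal closeRate = (decimal)customersClosed / prospectsWorked;                      // 0.10
            decimal acquisitionCostPerClosedCustomer = preSaleSpend / customersClosed;           // 2,500 per closed customer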

Less commonly considered is the architectural connection to a view like this. You might see this pattern if you're writing an insurance policy and then billing for it periodically.

Not only do these periods reflect different software needs, they also reflect different opportunities to learn about your customer. Writing a new insurance policy is critically important to an insurance company -- careful consideration goes into whether the risk of the policy is worth the premium to be collected. For that reason, an insurance company will invest a lot of time and energy in this process, and much will be learned about the customer here.

On the other hand, the periodic billing cycle should be relatively uneventful for customer and carrier alike, and less useful information is found here -- at least when the process goes smoothly.

I believe the high-activity / high-learning area also suggests a high-touch approach to architecture. Specifically, I think this is where highly-tailored software is appropriate for most enterprises, and it likely yields opportunities for data capture and process improvement. Pay attention to data gathered here and take care not to discard it if possible. Considering these activity profiles temporally may also lead us to identify separation among services -- in this case, the origination / underwriting service(s) are very likely different from the services we'd use after the customer is onboarded.

What's next?

I'm early in exploring this idea, but I expect to use it as one signal to influence architectural decisions. An activity-over-time view of customers will likely help focus conversations about technologies, custom vs. purchased software, types of services and interfaces, and so on.

At a minimum, I think this idea is an important part of modeling services to match activity levels, as not all customers hit the same peaks at the same time.

What is Craftsmanship?

I was asked recently to define "craftsmanship" in software development, and I thought this would make a great topic for discussion.  You can obviously find a dictionary definition for craftsmanship, but much like with architecture, I think that when we attach the additional context of software development, craftsmanship introduces some important ideas about how individual contributors fit into a larger software development organization.

Software as Art


The concept of craftsmanship in general emphasizes the skills of individuals, and connotes high-quality, highly customized work.  It is very much the antithesis of mass production.

This general understanding, when applied to software development, implies a very individualized experience -- indeed, if you think about buying a work of art or even something like a piece of furniture, "craftsmanship" would imply that the product you're buying is one-of-a-kind.  Nobody, literally, has another item quite like the one you're buying.  For businesses commissioning custom software, there's a nugget of truth here, of course -- one of the reasons to write custom software is the presumption that your business requirements are different in some way from everybody else's requirements, and your business demands the flexibility that comes with custom software.

Another key aspect of craftsmanship is the deep and broad skill set of the craftsman.  Just as you'd expect a hand-crafted product to be almost entirely the result of one man's work (to the extent that signing the work wouldn't be unheard-of), so software craftsmanship emphasizes individual skill and accountability.  And while craftsmanship doesn't necessarily imply apprenticeship, a craftsman is certainly expected to be highly experienced and able to draw on years of work in the field in order to produce the sort of high-quality work that's expected.

Business Implications

While high-quality, highly-customized software sounds very appealing from a business perspective, the deliberate pace and unyielding attention to detail we most often associate with craftsmanship are petrifying.  Business software needs to move at the pace of business, after all, doesn't it?  I really believe that the implied slowness of craftsmanship acts as an impenetrable stigma against its adoption, which I think is largely unfortunate.

One of the key benefits of the higher-quality code produced by accomplished craftsmen is that less time is spent on an ongoing basis recovering from the sins of shoddy coding as these systems grow and evolve over time.  High-quality code not only works better, it's more maintainable, so that initial effort can pay real dividends over the life of a long-lived system.  Where we sometimes talk about sloppy code as being high in technical debt, quality code can be seen as an asset for an organization.

Right now, software craftsmanship remains on the fringe of development practices, and thus tends to be favored in smaller shops, as well as in "green-field" projects much more so than in larger, more established environments.  It remains to be seen whether it eventually transitions into mainstream use the way that Agile has, but I think a lot depends on the effective communication of the real benefits of craftsmanship, as well as on mature tactics to introduce and manage craftsmanship in the organization.

My Take

Craftsmanship is about individual contribution.  It's about taking pride in your work, and it's about a relentless journey of improvement.  The scope of these experiences is best-suited to small groups of developers who can help one another grow and hold each individual accountable to the group.  This, all by itself, is a tall order.  It's not easy for anyone to expose himself to criticism, and that's a huge prerequisite for this sort of culture to form.  This sort of open sharing takes huge amounts of trust, and trust takes time to build.


Another critical motivator for a "craftsman-like" organization is the sense that the team is building something that's going to last.  When you think of hard goods built by craftsmen, you equate the higher quality to a longer lifespan -- as a consumer, you may be willing to pay a premium for a product that will outlast a cheaper product because you know it'll last a long, long time.  In software, this understanding of quality has to be visible to the team as well as the customer, and the team needs to believe that this higher quality is valued.  Few things are as demotivating for a developer as seeing a great body of code mangled by someone who doesn't share the same capabilities or desire for quality.

It might seem at this point that I'm advocating an all-or-nothing strategy for software development, or that software craftsmen can exist only in small shops, but I don't believe either of these is necessarily true.  I believe that if craftsmanship is going to exist in a large organization, however, the organization needs to understand where it's going to be applied and it must organize its teams to support craftsmanship in targeted areas.  I think this prerequisite is well-suited to a component-based Enterprise Architecture, in which well-defined components or services are identified, invested in, maintained with care, and protected, and yes -- this demands an enlightened and skilled application of Enterprise Architecture.

Craftsmanship isn't a magic wand, or a buzzword, or a slogan.  Craftsmanship takes hard work, and it demands commitment from organizations and developers.  It's not right for every organization, and it certainly isn't right for all developers, but if you need software that stands the test of time, craftsmanship will pay dividends for the lifetime of that software.

The politics of software

This fall, as healthcare.gov imploded before our eyes, we saw any number of self-proclaimed experts chime in on why it coughed, sputtered, and ground to a halt, and how, exactly, it might be fixed.  My guess is that the answer is more complicated than most of them let on, but I'll bet there's a healthy dose of politics mixed in with whatever technological, security, and requirements issues might have surfaced along the way.

It seems somewhat counter-intuitive to talk about politics at all in the context of software development, of course.  One of the aspects of software that really appealed to me when I entered this field was that for most problems, there existed an actual correct answer, and there are no politics in algorithms.  Ah, to return to the halcyon days of simple problems and discrete solutions!

Today's problems are more complicated than ever before, though.  Prodigious capabilities have bred complex systems and murky requirements under the best of circumstances, and no government project operates under the best of circumstances.  For those of you in private enterprise, you surely are aware of the struggles bred of competing interests and limited resources, but in a government setting, all those factors explode.  Funding is rarely connected directly to stakeholders, opinions are everywhere, and "deciders" are nowhere to be found.   Not to put too fine a point on this, but if we were to think of government-sponsored software as having been congealed rather than developed, we might be on the right track.  It's actually a small miracle when these systems work at all, given the confluence of competing forces working to rip projects in seventeen directions at once.

Think back for a moment on the early days of Facebook or Twitter or any of the other massive applications that serve as today's benchmarks of reliability.  They weren't always so reliable, of course.  Twitter, in particular, gave birth to a famous "fail whale" meme in 2009 as it sorted out its capacity and reliability issues.  To be clear, Twitter operates on a huge scale, but all it's doing is moving 140-character messages around -- there isn't a whole lot of business logic there, short of making sure that messages get to the right person.  It's pretty easy to gloss over some of those growing pains, but virtually every large system has them.  In the case of healthcare.gov, the failures happened under the hot lights of opening night and amid opponents who wanted desperately to see the system fail, and fail hard.

If you work amid politics like this, I'd love to offer a simple solution, but sadly, I have none.  Instead, I'd urge a little empathy; walk a mile in the footprints of developers, project managers, analysts, and testers on projects like this before you criticize too vigorously.  I can assure you that if you think this was a failure with a simple cause, you're mistaken.
