Bug fixes that are more than just bug fixes

It's that time again - time to go through the bug list and do some cleanup.  As I work through these bugs one by one, I've noticed that at least half of them end up being slightly more than just bug fixes.  As I crack the hood and peer in, sometimes I see that there's code inside that's not quite up to the standards of the rest of the application. 

When I see this, I generally try to fix the whole problem, and that means cleaning up (refactoring) the messy bits.  Sometimes you're working on an application that someone else wrote, and then it's pretty easy to blame the mess on the moron who coded it in the first place.  But what if the moron was you?  How do you end up with code that needs this sort of attention if you wrote the stuff originally?  There are tons of great articles and books on refactoring that cover this area in much more detail, but here are some of the things you're going to run into when fixing bugs:

  • Code built upon code.  This is probably the most common reason for refactoring.  None of the code you wrote was wrong when you wrote it, but Agile development stresses building one feature at a time, and that sometimes means you add new code on top of old code and create situations where common functions need to be pulled out, base classes need to be created, and so on (there's a sketch of this case just after the list).  These things should really be done during development, when the problem can first be seen, but sometimes they're missed.  When you see these needs during a bug fix, don't ignore them - they won't go away on their own.
  • New patterns that haven't been implemented everywhere.  This is a similar concept on a broader scale.  Often, we find better ways of doing something during the course of a project - a way we want to organize code, a set of CSS styles that we want to implement all over, or something like that.  Again, we really should take the time to go back and retrofit all of our existing code when we agree on a new way to do something, but sometimes that doesn't happen.  And again, if you're working in a section of your application that was "grandfathered", take the time now to bring it up to spec.
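
To make the first bullet concrete, here's a minimal, hypothetical Java sketch - the class and method names are mine, not from any real project.  Two features written at different times end up carrying the same rule, and the refactoring pulls that rule up into one base class so a future fix lands everywhere at once.

    // Hypothetical "before" state: two report classes, written at different
    // times, each carrying its own copy of the same date-range rule.
    import java.time.LocalDate;

    class InvoiceReport {
        boolean isInRange(LocalDate d, LocalDate start, LocalDate end) {
            return !d.isBefore(start) && !d.isAfter(end);   // copy #1
        }
    }

    class ShipmentReport {
        boolean isInRange(LocalDate d, LocalDate start, LocalDate end) {
            return !d.isBefore(start) && !d.isAfter(end);   // copy #2, free to drift out of sync
        }
    }

    // Hypothetical "after" state: the shared rule lives in one base class,
    // so a bug fix to the range logic only has to happen once.
    abstract class Report {
        protected boolean isInRange(LocalDate d, LocalDate start, LocalDate end) {
            return !d.isBefore(start) && !d.isAfter(end);
        }
    }

    class InvoiceReportV2 extends Report { /* inherits isInRange */ }
    class ShipmentReportV2 extends Report { /* inherits isInRange */ }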

Why bother cleaning up this other stuff if it was working?  For one thing, it's just the right thing to do.  If you don't take care of your code base in this way, your application is going to be legacy code before you go to production.  More fundamentally, though, the code probably wasn't working as well as you think if there are bugs logged against it.  Maybe the "old pattern" followed in this code is susceptible to problems that were fixed in newer code.  Maybe the code is more complex than it needs to be, allowing bugs to hide in the nooks and crannies.  Part of a good bug fix is to look for ways to prevent a bug like that from ever happening again, and very often, this means refactoring so that the "fixed" code can be painted onto the rest of your application.

There are times when you don't really want to rip up drywall in order to fix a surface blemish, of course.  If you're late in a release or making a production fix, you absolutely do not want to introduce any unnecessary instability.  This means keeping your change as small as possible to limit the chance of introducing new bugs along with the fix.  Do yourself a favor, though - leave a TODO note in the code or a bug in your tracking system so you're reminded to go back at a more appropriate time and clean up.

The next time you fix a bug, take a little time to look around for the right fix - not just the quick fix.  You'll end up with a better code base in the long run.

Don’t bring an application home if you can’t feed it.

About four months ago, I helped a client put an application into production.  This client has a mix of internal and external customers, and those customers are essentially captive (they have little choice about using this application).  As with any new application, there are infrastructure headaches, customer questions, training issues, and the like.

I was talking with someone today about a support issue, and I asked how the client was tracking calls, issues, and so on.  "Oh, we're not tracking them."  Ah.  And we talked about their customer service staff.  "Well, they're not so much customer service as they are department secretaries that take customer calls."  I see.  "But there's a help desk downstairs - I've seen it," I said.  It turns out that the help desk isn't staffed all the time.

Why?  No money.  And all the while, the customer calls are coming in, and the secretaries are doing what they can to help these poor customers, who don't have a choice but to keep soldiering on.

This organization is probably closer to the norm than we'd like to imagine.  It's pretty common, in fact, to look at the development price for an application and completely ignore the ongoing cost to maintain the system.

Purchased software is no more immune to this problem.  I've helped customers implement CRM systems, for instance, and a critical success factor is the customer's ability and willingness to invest their employees' time in the care of the system.  A CRM system is only as good as its data, and even the best systems require data maintenance and cleanup.  Duplicate records need to be merged, assignments need to be fixed, and dead data needs to be scrubbed.  Failure to invest in these activities will doom your system to obsolescence.
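
As a concrete (and entirely hypothetical) illustration of that kind of data maintenance, here's a small Java sketch that groups contact records sharing a normalized e-mail address so a person or a merge rule can deal with them.  None of the record or field names come from a real CRM product.

    import java.util.*;

    // Hypothetical sketch: find groups of CRM contacts that look like duplicates.
    public class ContactDedupe {
        record Contact(String id, String name, String email) {}

        static Map<String, List<Contact>> groupDuplicates(List<Contact> contacts) {
            Map<String, List<Contact>> byEmail = new HashMap<>();
            for (Contact c : contacts) {
                String key = c.email() == null ? "" : c.email().trim().toLowerCase();
                if (key.isEmpty()) continue;                       // no usable key, skip
                byEmail.computeIfAbsent(key, k -> new ArrayList<>()).add(c);
            }
            byEmail.values().removeIf(group -> group.size() < 2);  // keep only real duplicate groups
            return byEmail;
        }

        public static void main(String[] args) {
            List<Contact> contacts = List.of(
                    new Contact("1", "Pat Smith", "pat@example.com"),
                    new Contact("2", "Patricia Smith", "PAT@example.com "),
                    new Contact("3", "Lee Wong", "lee@example.com"));
            // Each group still needs a person (or a merge rule) to decide which record wins.
            groupDuplicates(contacts).forEach((email, group) ->
                    System.out.println(email + " -> " + group.size() + " possible duplicates"));
        }
    }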

When your son or daughter wants to bring home a pet, most of us are comfortable explaining the responsibility of pet ownership.  "If you want a dog, you're going to have to take care of it."

But how many people apply this to their applications?

By the way, if you like the picture at the beginning of this post, check out http://www.imagechef.com/ - you can choose from dozens of stock photo backgrounds and add your own stylized text - pretty cool.

Do you know cause from effect?

I fixed a bug today - a bug that was introduced because a chunk of code was depending on side effects of another piece of code to work properly.  Depending on whether you've ever worked directly with a programming language, this may seem like a subtle transgression, but the problem is distinct and real, and here's the best part - if you learn why this is a problem, you'll understand where this flawed thinking bites you in everyday business decisions, too.

So let's back up and look at a non-programming example. Let's say you go in to see your doctor because you're feeling under the weather. The doctor runs some tests and sees that your white blood cell count is higher than expected. A good doctor would know very well that there are lots of reasons this can happen, but someone a little less disciplined who's sitting at home playing "House, the Home Game" might shout out, "it's cancer!" before really having any evidence to support their crackpot theory.

Most of the time, mistakes like this are lots more subtle, but they always start with someone seeing an effect and thinking they know the cause. Sometimes, we guess and we get it right, but often, we don't.

How can we know when we've got it right? We need to relentlessly question the assumptions we make about the things we believe to be true. According to Alan Axelrod in his book, Patton on Leadership, Patton would frequently ask, "How do we know that?" His lesson is that we need to consider our sources of information, because some are intentionally misleading, others are unintentionally incorrect, and a few are reliable. If we can tell the difference, we're doing well.

Instead of asking, "how do we know," we could also ask, "why?" In a recent blog post, Joel Spolsky talks about his company's use of the Five Whys. Made famous at Toyota, and credited with part of Toyota's legendary quality, the Five Whys are a structured approach to finding the root cause of a problem. In the case of a business problem, this might mean following a problem beyond our own department to uncover inter-departmental workflow issues, while in software development, this sort of root-cause analysis might lead us to find weaknesses in our architecture. The key here is not to stop asking questions when you get your first answer, because you're probably still missing something important.

In the case of my bug, the code was checking the state of a variable, but the variable wasn't a definitive indicator of the condition we were interested in. Rather, it was a variable that corresponded to the condition we were looking for most of the time. When that pattern changed, our code broke. The fix? Change our code to check for the condition itself, not a coincidental side effect.
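
Here's a stripped-down, hypothetical Java version of that pattern - the domain and the names are invented, but the difference between the two checks is the whole point:

    // Hypothetical sketch of the bug pattern described above.
    class OrderProcessor {
        enum OrderStatus { NEW, PAID, SHIPPED }

        private boolean confirmationEmailSent;        // set by a separate code path
        private OrderStatus status = OrderStatus.NEW;

        // Before: relies on a side effect.  The confirmation e-mail happened to go
        // out right after payment, so this "worked" - until the e-mail step changed
        // and the correlation broke.
        boolean canShipBuggy() {
            return confirmationEmailSent;
        }

        // After: checks the condition we actually care about.
        boolean canShipFixed() {
            return status == OrderStatus.PAID;
        }
    }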

Lest I lead you to believe that these problems are easy to spot, I'll assure you that many aren't. I've been caught a few times with the .Net framework in particular, when I've looked at the value of a framework property while debugging and made assumptions about the property's general behavior. By believing (falsely) that I knew how the framework was going to work, I left myself open to bugs.

Although it's not easy to train yourself to think this way, a questioning mindset can pay off in a better understanding of things that are really true versus the things we merely believe to be true. There's a world of difference.
