I saw a pretty entertaining post this week on one of the Channel 9 forums. A guy there reacted to some Windows Home Server bug counts from their beta program. Personally, I found this guy's reaction to be humorous all by itself:
I definitely like this topic:
Of the bugs that have been addressed, Sullivan said that only 15% have actually been fixed. The remainder are issues that are in the server by design (13%), not reproducible (21%), will be postponed to later versions (11%) or likely won't be fixed (7%).
Bloody hell... ALL REPRODUCIBLE BUGS MUST BE FIXED! (Or never made!). There is no such thing as "bug by design". It was invented by big corps to cover own failures, as "excuse".
An energetic, uninformed rant is always worth a chuckle, but in this case my interest extends beyond the first reaction. You see, this person represents a real constituency out there: people who know just enough about the software development process to be dangerous.
Their credo goes something like this: to ship software, you just fix every bug until none remain. Any software that ships with more than zero bugs is proof that the software company is a money-grubbing corporate blight on the face of the industry.
OK, fess up: how many of you have actually shipped commercial software? How many have sent software out for testing and then fielded unsolicited "help" from those testers? I've done both on a smaller scale than Microsoft is operating at here, and the numbers don't startle me a bit.
In fact, if you develop commercial software or plan to do so someday, I'd recommend you start watching the Windows Home Server Blog. I know I was critical of the opacity of the initial marketing launch for this product, but the subsequent transparency of the development process is really impressive. Pay attention, because you're getting a glimpse inside the sausage factory.
You see, when you open testing up beyond a tightly controlled QA environment, there are only two ways to get a grip on your bugs: you either throttle them before they come in, or you sort and categorize them after they're reported. The former is almost always impractical, not to mention how oh-so-valuable it makes the tester feel ("Thanks for the report. I'll just put it here on this stack to review before I enter it into the system. We'll be sure to let you know if it's entered as a 'real' bug").
Categorization is really your only option for a beta program like this, and when you let anybody report any bug, you're going to get a lot of chaff. Again, the distribution reported by the WHS team looks pretty reasonable, and the team is doing a great job of explaining the numbers along the way.
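If you're curious what a triage distribution like the one quoted above looks like in practice, it's easy to model. Here's a minimal sketch; the category names mirror the quote, but the counts are entirely made up for illustration, not the WHS team's actual data:

```python
from collections import Counter

# Hypothetical resolutions for a batch of 100 beta bug reports.
# Categories follow the quoted breakdown; the counts are invented.
resolutions = (
    ["fixed"] * 15 + ["by design"] * 13 + ["not reproducible"] * 21
    + ["postponed"] * 11 + ["won't fix"] * 7 + ["still open"] * 33
)

def triage_distribution(resolutions):
    """Return each category's share of all reports, as a percentage."""
    counts = Counter(resolutions)
    total = len(resolutions)
    return {category: 100 * n / total for category, n in counts.items()}

for category, pct in sorted(triage_distribution(resolutions).items()):
    print(f"{category:>18}: {pct:.0f}%")
```

The point isn't the arithmetic, of course; it's that "addressed" covers every category here, while "fixed" is just one slice of it.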
I guess it's hard to fault Microsoft for being opaque sometimes when openness like this is turned against them by a misinformed public. Still, I hope we see more of this face of Microsoft.