Migration headache: MSDataSetGenerator failed

Here's a real treat.  I'm working on yanking some old .Net 1.1 code forward to 2.0, including a bunch of DataSets, so I was grabbing these old 1.1 DataSet XSD files, pasting them into the new project directory, doing an "Add Existing" in the new project, and letting VS2005 re-generate all the "accessory" files that accompany the XSD.  This wasn't really exciting, but it was easy, and it was working really well until I got to one that just wouldn't import cleanly.
The first thing I noticed was a build error:

Custom tool error: Failed to generate code. Failed to generate code. Exception has been thrown by the target of an invocation. Input string was not in a correct format. Exception has been thrown by the target of an invocation. Input string was not in a correct format.

Nice.  Thanks for all the detail here, 'cuz that just makes the problem obvious, right?  Not so much.  The next thing I noticed was that I wasn't getting a .cs file generated for this XSD - clearly a related problem, so maybe I could get a better error message out of the designer.  So I right-clicked the XSD and hit "Run Custom Tool", hoping for a wealth of new information.  I got the same vague error all over again.

I hadn't really gotten my hopes up, but still...
As I walked away muttering product management suggestions for Microsoft under my breath, I ran into a co-worker who suggested breaking this beast down and adding one part of the XSD at a time until it barfed.  Of course!  This ended up being somewhat time-consuming, but it did, in fact, point out the place where the XSD was broken.
The 1.1 schema had a flag that could be used to indicate the value to use when null values turned up.  I therefore had a couple of lines in my XSD that looked like this:

<xs:element name="profile_type_id" type="xs:int"
    minOccurs="0" codegen:nullValue="" />

Note the value -- empty string, which is not a default one typically associates with ints.  The 1.1 designer seemed perfectly happy with this, but the 2.0 designer wasn't.  I'm still not real thrilled with the punt-on-error approach exhibited by the 2.0 designer - that's a great way to blow a bunch of time chasing down inane syntax problems.
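For anyone hitting the same wall: the fix is just to give codegen:nullValue something the generator can actually parse as the element's type.  A sketch of the repaired element -- the replacement value of 0 is my own choice here, and if memory serves, "_throw" is also legal if you'd rather get an exception when a null turns up:

```xml
<!-- nullValue must be parseable as the element's type (xs:int here) -->
<xs:element name="profile_type_id" type="xs:int"
    minOccurs="0" codegen:nullValue="0" />
```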

But lucky for you, you'll know just where to look, won't you?

Visual Studio 2005 install madness

I'm working with a new computer and a new setup where I'm running all development tasks in a VM. It's been pretty interesting so far - I'll blog about my experience with this setup later. Right now, though, I'm getting ready to chuck the PC out the window because I can't get VS2005 to work right.

When I sat down to work on this new machine, VS2005 was already installed - Great! But when I opened an existing solution, it told me it couldn't load a web project in that solution. No big deal, right? I looked at the project, and sure enough, it was a Web Application Project. Since this is a project type that was made available after VS2005's release, I figured the project type just wasn't installed, so I went to find the installer. No sooner had I started that hunt than I saw that Web Application Projects were folded into VS2005 SP1, so I went and grabbed that. In the meantime, I double-checked that I didn't have Web Application Projects installed by trying to create a new project in VS2005 -- no Web Application Project was available.

I started the VS2005 SP1 install, and I was surprised to see that my machine was pretty sure that SP1 was already installed. Since I was also pretty sure I didn't have the right stuff on my machine, I let SP1 reinstall, and then went and tried to load my solution again. Boom. Ditto on creating a new project. Things were starting to get old.

I spent the better part of a day trying to find the problem, and I finally stumbled on this blog entry from Eric Hammersley. Alas, there were all sorts of people in the same boat. I started working down the list of fixes, repeatedly getting my hopes up and then having them dashed. I mumbled a lot of not-so-nice things about VS2005 under my breath.

My fix turned out to be permissions-related. After I set the permissions for my Visual Studio install directory to give the Creator-Owner full permissions, then re-ran the SP1 setup, I got my Web Application Project support back - just like magic.

From the looks of this episode, I'd be wary of running any Visual Studio install that you didn't install yourself. Not exactly the behavior you'd like to see out of a company that's trying to erase a reputation of security ambivalence, but it is what it is.

I’m a happy LogMeIn user

Like many people, I know I am quick to slam products when they don't work well, but fail to give kudos when they do work well.  Today, I'm going to throw a tally on the good side of that scorecard for LogMeIn. I've used LogMeIn for around a year, now, and I've yet to have it let me down.  Not too bad, considering where some other software vendors set the bar.

If you haven't heard of it, LogMeIn is a remote access / remote control product - in fact, a suite of products, really.  When I initially set up my account, there were basically just "Free" and "Premium" options, but they've added new products at a dizzying pace.  They're up to something like ten products now, including one I didn't even know about until I looked today.  Most of these products seem to be specialized derivatives of their core technology, which is a really nice approach to see - sort of the opposite of the "throw everything in one basket" approach that many products take.  Despite the breadth of products, all of my experience has been with the basic free product.

First, the bad news.  You have to run an installer on each PC you wish to control, and you need to install an ActiveX control in the browser of any PC you wish to use as a "viewer".  I'd love to see the viewer work without an install (some administrators won't permit one, of course), but even so, the need to install these pieces has never actually hindered me.

There's a lot more good news than bad news, though.  Here are just a few of the things I've noticed (and liked) so far:

  • The "host" component prompts you when there's an update available, but it doesn't stop running until you upgrade.  This can really save you when you're logging in to a "headless" PC remotely, where the first chance you have to see that the host wants to run an update is after you've already remoted into it.  It would be a real drag to discover that a whole bunch of PCs just became unreachable because they were waiting to be upgraded.
  • I've never had any problems traversing firewalls, proxies, etc.  File this under "it just works", and I'm grateful for it.
  • It works on my Windows Mobile phone!  I'm using Win Mobile 6 on a T-Mobile Wing, and it works like a charm.  I certainly wouldn't want to work for any extended time this way, but it's really, really great to be able to get to the one stupid little button on my desktop that I desperately need to press when I'm sitting in a coffee shop on the other side of town.
  • Screen scaling.  This is one of those little usability features that escapes attention at first glance.  Many remote control apps will let you view the host's screen at exactly the resolution of the host, meaning that if you're looking at a screen larger than your client's window, you're doing a lot of scrolling to see the whole screen.  LogMeIn, though, scales your host's screen to fit in your client's window - no matter how large the window is.  That means that if you're watching a remote PC to see if a job's done, or something like that, you can do so in a small window.  The resolution suffers a little, of course, when you scrunch the screen down, but it's really pretty remarkable how readable the screen stays as you resize it.  This is a really nice touch.
  • Did I mention that it just works?
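That scaling behavior, by the way, is just min-of-the-two-ratios math.  Here's a quick sketch of the idea -- illustrative only, obviously not LogMeIn's actual code, and in Python purely to show the arithmetic:

```python
def fit_scale(host_w, host_h, window_w, window_h):
    """Largest uniform scale factor that fits the host screen inside the
    client window while preserving the aspect ratio."""
    return min(window_w / host_w, window_h / host_h)

# A 1600x1200 host viewed in an 800x450 client window:
scale = fit_scale(1600, 1200, 800, 450)              # height wins: 450/1200 = 0.375
scaled = (round(1600 * scale), round(1200 * scale))  # -> (600, 450)
```

The key point is using one scale factor for both axes; scale width and height independently and you'd get a funhouse-mirror version of the remote desktop.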

I've tried quite a number of other remote-access products over the years, including VNC and Windows Remote Desktop (running over various tunnels and VPNs), and LogMeIn beats them all when it comes to reliability, and when it comes right down to it, that's the #1 feature for remote access, isn't it?  If you haven't tried LogMeIn yet, go give them a shot.  You won't be disappointed.

Working with Rocky Lhotka’s CSLA Framework

I recently wrapped up development of a new application using Rocky Lhotka's CSLA Framework, and I really liked it a lot.  I think there's a fair bit of misinformation and confusion circulating about this software, so while it's all fresh in my memory, I threw together some notes on the things that worked well, and the ones that didn't work so well, too.

Bear in mind that this material is offered at a high level - if you're interested in really learning the framework, I'd encourage you to buy the book, download the framework, and dive in - the water's fine.

Benefits

One of the obvious benefits of this framework is that Rocky's written a book to document the framework (multiple, if you count C# and VB variants, plus updates for new releases).  The book does a pretty good job of walking you through the construction of the framework, explaining "why" as well as "how".  You can certainly learn the framework without reading the book, but if you like to do your tech learning out of a book, this one is a great read.

The CSLA framework takes a very OO approach to design.  Where a lot of code-gen or "factory" approaches lay the entire model open and trust developers to do the right thing, CSLA enables and, in fact, encourages defensive programming with respect to the object model.  You get to actually start using scoping modifiers to protect the inner workings of your classes, which is really nice.  In my opinion, this encourages a much more robust object model because you can lock down ctors and setters where it makes sense.

This is feasible because you're using the same objects everywhere you need them, regardless of physical tier, and this is actually one of the places where people's eyes start to glaze over.  Rocky actually does a great job of explaining this in his book, but a lot of people see examples with code from multiple logical layers in the same class file, and conclude that CSLA precludes the use of multiple physical tiers.

In fact, the framework makes n-tier development much easier and more flexible because it abstracts the protocol across layers.  I can take the same code and configure it to run all on one server (without incurring network traffic across tiers), or as a business layer and data layer connected via HTTP web services, WCF, or remoting.  It really is just a config change to switch among these modes, which is pretty nice.
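For the curious, the switch lives in appSettings, roughly like this in the CSLA versions I've used (key names from memory, and "myserver" is obviously a placeholder -- check the book for the exact spelling in your release):

```xml
<appSettings>
  <!-- Run the data portal in-process: no network hop at all -->
  <add key="CslaDataPortalProxy" value="Local" />

  <!-- ...or comment the line above and point it at a remote WCF host: -->
  <!--
  <add key="CslaDataPortalProxy" value="Csla.DataPortalClient.WcfProxy, Csla" />
  <add key="CslaDataPortalUrl" value="http://myserver/DataPortalHost/WcfPortal.svc" />
  -->
</appSettings>
```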

Some of this flexibility comes from the use of reflection in some spots, and that can put people off, but IMO, the benefits far outweigh the costs unless you're trying to trade stocks or land the space shuttle.  Comparing two applications developed for the same client and deployed into the same environment, the second app (using CSLA) is noticeably snappier, mainly because we've got fewer layers of junk to sift everything through, and because we've got a lot more control over how we optimize data access.

Data Binding is usually listed as one of the benefits of CSLA, but I'd broaden this to include support for all the MS interfaces that you'd want to support if you were starting from scratch.  It does authentication and authorization, for instance, using the same MS interfaces you'd expect to find in use elsewhere.  It does validation using the standard interface, and so on.  The effect of all this is that stuff just *works*, instead of being a constant struggle to go build something whenever you want to interface with something new.

The CSLA objects have excellent support for "state", including IsDirty-type infrastructure so the objects know when they need to be saved and what sort of CRUD operation that might be.  They're also capable of supporting n-level undo, though I haven't needed that.

Finally, the objects have good support for validation, including the ability to declaratively write business rules that drive "IsValid" and, with it, the ability to save your object.  These rules can be reasonably sophisticated, since you're writing them in code.
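To make the state and validation story a little more concrete, here's a toy sketch of how dirty tracking and declarative rules can fit together.  This is Python pseudocode of my own invention, not CSLA (which is a .NET framework), but the shape of the idea is the same:

```python
class BusinessObject:
    """Toy sketch of CSLA-style state tracking plus declarative validation."""

    def __init__(self):
        self._data = {}
        self._rules = []        # list of (field, check, message) tuples
        self.is_dirty = False

    def add_rule(self, field, check, message):
        self._rules.append((field, check, message))

    def set(self, field, value):
        self._data[field] = value
        self.is_dirty = True    # the object now knows it needs saving

    @property
    def broken_rules(self):
        return [msg for field, check, msg in self._rules
                if not check(self._data.get(field))]

    @property
    def is_valid(self):
        return not self.broken_rules

    def save(self):
        if not self.is_valid:
            raise ValueError("; ".join(self.broken_rules))
        self.is_dirty = False   # a real insert/update would happen here


# Usage: the rule drives is_valid, which in turn gates save().
cust = BusinessObject()
cust.add_rule("name", lambda v: bool(v), "Name is required")
assert not cust.is_valid and not cust.is_dirty
cust.set("name", "Acme")
assert cust.is_valid and cust.is_dirty
cust.save()
assert not cust.is_dirty
```

The real framework does far more (n-level undo, per-property authorization, and so on), but this is the gist of why "state" support matters: the object itself knows whether it's valid and whether it needs a trip to the database.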

Drawbacks

I've seen a few.  First, it seems like I was constantly fighting FUD from people who didn't know anything about the framework.  As I mentioned earlier, there are a lot of misconceptions about this framework - maybe because of its "legacy" roots.  I saw very little real merit in these arguments, however - they just don't hold water when measured against facts.

Second, I saw a couple of places where I got caught building an object based on one base class, and it turned out later that I probably should have been using a different base.  At that point, you don't get any additional help with refactoring that doesn't already come in the box.  There are modeling or code-generation tools that can spew out CSLA code, and my experience here might have been different had I been using one of these.  In any event, I was still way ahead using the framework.

Finally, I saw some places where validation didn't cascade very elegantly down to complex children of parent objects.  This isn't the end of the world, since you can override anything by hand, but it would have been nice to get just a little more help from the framework on this one.  At moments like these, it's hard to resist the urge to just grab the CSLA source code and fix it.  That's an absolute last resort, of course, since it would make upgrades much more difficult.

In case you're looking for more info on CSLA, I've pasted a few representative "community" opinions below - feel free to take a look at any or all of them.  FWIW, Rocky tends to do a great job of explaining himself in forum posts, but I've yet to see him put together an effective "elevator speech" for CSLA, which is unfortunate.

CSLA Framework benefits:
http://www.primos.com.au/primos/Portals/0/Tech%20articles/CSLA%20WIIFM/What%20could%20CSLA%20do%20for%20me.html
Blogger's pros and cons:
http://claytonj.wordpress.com/2006/10/17/using-csla-as-an-application-framework/
InformIt article:
http://www.informit.com/articles/article.aspx?p=770361

Here’s a clue that your API stinks

I was doing some development recently using Rocky Lhotka's excellent CSLA framework.  This framework lets you easily connect multiple tiers of an application with HTTP Web Services, WCF, Remoting, or Enterprise Services (you can even change transports just by changing your config files - cool!).  In my case, I was using WCF, and I had occasion to change a value in the config file.

The next thing you know, I was looking at an error box telling me that two values in my config file had to match (just to be clear, this was a WCF error - not a CSLA error).

Now, in this case, the error message led me right to the source of the problem, so A+ for the efficacy of the error dialog.  D- for the API, though.  If you already know that two values in the config file have to be the same,

[Pause for effect]

Why... are... there...

[Another pause.  At this point, I'd like you to picture Lewis Black belting out this last bit, his outstretched finger pointed toward Redmond and quivering slightly.  If you're not a Lewis Black fan, use Jackie Gleason from Smoky & the Bandit.]

TWO   CONFIG   VALUES???

I'll allow for the possibility that this was lots more cathartic for me than it was for you, but I think you've got the picture.  If you already know the answer, then don't ask me the damned question, ok?  If you already know that both of these config values have to be the same, then one of them has to die.
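The general cure is simple: ask for the value once and derive everything that must agree with it.  In WCF's case, if memory serves, the guilty pair is maxBufferSize and maxReceivedMessageSize, which must be equal in buffered transfer mode.  A language-neutral sketch of the idea (Python here, with the key names borrowed from WCF):

```python
# Anti-pattern: make the user supply two values that must always agree.
bad_config = {
    "maxReceivedMessageSize": 2_000_000,
    "maxBufferSize": 2_000_000,   # edit one and forget the other -> boom
}

# Better: take the answer once and derive the rest internally.
def build_transport_config(max_message_size):
    return {
        "maxReceivedMessageSize": max_message_size,
        "maxBufferSize": max_message_size,   # derived; it *can't* disagree
    }

cfg = build_transport_config(2_000_000)
```

If the second value ever legitimately needs to differ, make it optional and default it to the first -- but don't make two mandatory questions out of one answer.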

The point in all this ranting is that API's have usability, too.  If your users are developers, and you do things in your API that frustrate your developer-users, they're not going to be big fans of your product.  Microsoft has traditionally been head-and-shoulders above the competition when it comes to API usability, and I truly believe this played no small part in the rise of Windows.  A better API means developers will enjoy writing for that platform, which implies more apps for users and more success for the platform.

Ignore this rule at your peril, though, because unhappy developers will avoid your platform whenever possible, and you'll soon find you don't have nearly as many people complaining about your API anymore.


I Can Haz Mashups?

It's a popular misconception that if you throw some SOAP, WCF, or J2EE service layers on top of an application, it's automatically easy to integrate.  I place the blame for this misinformation squarely on PHB's and Gartner conferences, because this is simply a case of management by buzzwords.

A good friend of mine used to tell me all the time that the reason we're entertained by a dancing bear isn't that the bear dances well - it's that the bear dances at all.  There once was a time when just being able to talk to a mainframe routine from Java, or talk to Java from .Net, or talk to a routine on one of your partners' computers was a major technical triumph.  Web services made the bear dance, it's true.

Now that we've seen the bear dance, though, we're looking for a few new steps and a little more rhythm.  We want the bear to dance well.  We want quick integration.  We want security.  We want reuse across our enterprises, and yes, Virginia, we want mashups like those kids are doing on face-space, or my-book, or whatever the hell it is they're using.  We drank the Gartner kool-aid, and we want some of that Web 2.0 stuff, too.

But sadly, integration isn't something you pull out of your Web 2.0 Conference swag bag like a door prize.  You will, in fact, have to know what you're doing to achieve any really meaningful results.  Here are a couple of places where conference-oriented development gets tripped up.

Misconception #1: If we're using web services, no additional effort is needed to make our systems integrate well.

Even though nobody actually makes this claim in black and white, lots of executives (and even some software architects) seem to believe it at some level.  Even Gartner will tell you that SOA is a lot of work, but somehow, that's not the message that people seem to internalize.

The truth is that integration on an Enterprise scale is still hard work, and your results are still dependent on planning and execution.  It's true that web services make integration easier than it was in days gone by, but you won't end up on the cover of CIO magazine just because you can spell WSDL.

This may not be a popular point of view, but I believe you've really got to treat integration as a product in its own right in order for it to truly be successful.  The API's you create have users, and that means you've got to consider the usability of your API's just as seriously as you'd consider the usability of a flagship application.  I won't go into detail here on what usability means for an API, but it's made up of exactly the same factors as application usability.  If you don't pay attention to the people who are going to use your API, your integration isn't going to be much more successful than an application that all your users hate.

Misconception #2: We can mask our business problems by using web services.

Sorry again.  I've seen this a bunch of times, and it never ceases to amaze me.  If you've got a business problem you can't seem to solve without the benefit of a web service, it's unlikely that a web service (by itself) is going to fix your problem.  Data problems?  If you've got data integrity rules to enforce, a web service is great at enforcing them, but if your data is already messed up and you don't have clear algorithmic rules to fix the data, a web service isn't going to help you very much.

Similarly, if you've got a well-defined business process, a web service may be just the thing to expose that process to other applications.  If your business process isn't well-defined, or (worse) if your business users can't agree on a process or other requirements, a web service won't bridge that gap.

Business issues like these really have to be raised up and dealt with in cooperation (at least) with your business leaders.

So embrace SOA and web services, yes.  But don't blindly trust that they're going to fix your problems for you.


Twitter: Snatching Defeat from the jaws of Victory

It's rare these days that a Web 2.0 startup lands a round of financing, and the funding is completely overshadowed with bad news.  Twitter isn't just shooting itself in the foot - it's mowing itself down with a chain gun.

Problem #1 is uptime, or lack thereof.  Anyone who's been on Twitter over the last month or so has experienced a *severe* up-again, down-again roller coaster ride at Twitter.  Every day, it seems like there's an outage, and some of them have lasted for hours.  I've seen far more blog traffic about "Twitter's down again" than I've seen talking up the service itself, and that's not good.  Let's not forget that Twitter is still solidly in "early adopter" territory, and they're not going to attract too many mainstream users if the site's down.

Twitter has taken some heat for lack of transparency during this episode, and even that has been entertaining.  They're blogging about what's going on (to some degree), but as a commenter pointed out, they're Twitter for Pete's sake -- Twitter about it!! It's slightly disturbing that this didn't come as second nature to these guys, since this is what they do.  I've also noticed that the manner in which they "splat" is singularly ungraceful.  When Twitter crashes, I see (on my mobile browser) the PHP equivalent to a blue-screen, not a buttoned-down page explaining that there's a problem and someone's working on it.  Here's another take on their error reporting.

Now, I understand that one of the purposes of this round of funding they just landed is to shore up their infrastructure and get some of this stuff ironed out, but I wonder if it'll happen in time.  Fixes for problems like these, in my experience, are a long time in coming, and an infusion of cash today is still months away from paying dividends once you use the money to hire help, buy servers, configure the servers, install the servers, fix the software, test the software -- you get the picture.

But as bad as this all sounds, uptime may not be Twitter's biggest problem.

Read this.

From everything I've seen so far, Ariel's account of events is undisputed and accurate.  Twitter's response is pathetic.  After sandbagging for months, they're now hiding behind the "we're an infrastructure company, not a content regulator" argument.  Totally lame, and totally inadequate.

I'm not aware of anyone who's learned about this and hasn't been outraged, and I'm not aware of anyone who's defending Twitter's response to-date.  Sometimes, stories like this have a grey area where you could argue either side of the story, but that's not true in this case.  Twitter is simply alienating users as fast as this news travels, and it's traveling fast.

The only good part about this story (for Twitter) is that it's taking attention away from their downtime.  As far as silver linings go, that's pretty lousy.

Aside from just condemning Twitter, though, why would we, as software professionals, care about Twitter's agony?  That's easy - we want to learn from their mistakes.

The Ariel Waldman mistake is big and obvious, and the lesson is pretty simple, too: Don't be a jackass.  If there's a problem, acknowledge the problem and meet it head-on.  Don't lube up and try to slither past the problem, because it's going to keep haunting you until you kill it dead.  The stall tactics, the wordsmithing, the "we're just a humble little infrastructure company" line are all just ways to bide time in the hopes that the story will just go away on its own.  How's that working out, Twitter?

The second lesson seems just as simple, but I don't believe it really is.  "Don't build a crummy infrastructure," you say.  Sure, but how?  The data model Twitter provides isn't as easy to model as you might initially believe.  Here's an excellent discussion of what their architecture might look like.  So it's very possible that they're facing some architectural challenges far beyond those most of us see in our careers.

Add to that the startup culture of "features first, then users, then scaling and monetization," and you can end up with a code base that works really well at low volumes, but just breaks apart when you start putting a real load on it.  A friend of mine used to tell me all the time that "the thing that makes you successful today will kill you tomorrow."  In this case, that means that our architectures have to be able to absorb change on a wholesale scale.

I really hope Twitter figures this out and pulls through.  They've got a lot going for them, and they're close to making the leap to the next level, but they need to get with the program in a big way, like *right now* if they're going to pull out of this nosedive.


Roman Wagon Wheels

It's said that the gauge (width between tracks) of American railroads can be traced from bureaucracy to bureaucracy back to the width of Roman war chariots.  It turns out that this is just another urban legend, but if you've ever worked in an organization of any size, you've experienced the organizational inertia that makes this legend so plausible.

I ran into a great one today, for instance.  I'm doing some work in a place where I don't get to set the standards (yes, it's a government agency).  One of the real winners is a standard that mandates that all SQL queries for an application be stored in an XML file, with the queries and their corresponding parameters specified.

I actually learned about this one quite some time ago, and of course, I asked why this requirement existed.  This is a proprietary structure, and none of the modern conveniences of .Net development know a thing about this file, so it really slows things down.  The file also tends to grow large and unruly over time, and it's difficult to find anything in it because, again, .Net doesn't know anything about its syntax.

The reason, I was told, is that the DBA's want to be sure that all of the queries for the entire system could be found in a single place so they'd be easy to review and tune.  I bought this hook, line, and sinker.  I may also be interested in a bridge if you have one for sale.  After paying close attention to the ongoing flurry of intense DBA activity that consistently failed to materialize around this file, though, I began to have my doubts.

So today, I was debugging a really nasty instance of this mess -- a search query.  The "advanced search" screen has a collection of fields that the user can search, and like most advanced search screens, the user only fills in the fields she wants to search - the rest are left blank.  When we search, we query on the fields containing data and ignore the rest.

But that's not how our search works.  Since we've got to store our queries in XML and substitute parameters to alter them, we have a huge, awful SQL statement that joins all of the tables that might possibly be used in a search.  Not very efficient, and certainly not easy for developers.  This mechanism has been difficult, slow, and error-prone since day one.
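For contrast, here's what building the query from only the populated fields might look like.  This is a hypothetical sketch (table and column names invented, not the agency's code), and in real life you'd also whitelist the column names before splicing them into SQL:

```python
def build_search_query(filters):
    """Build a parameterized search from only the fields the user filled in,
    instead of one giant query that joins everything just in case."""
    clauses, params = [], []
    for column, value in filters.items():
        if value not in (None, ""):      # ignore blank search fields
            clauses.append(f"{column} = ?")
            params.append(value)
    sql = "SELECT * FROM profiles"       # 'profiles' is a made-up table
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

sql, params = build_search_query({"last_name": "Smith", "city": "", "state": "MO"})
# sql    -> "SELECT * FROM profiles WHERE last_name = ? AND state = ?"
# params -> ["Smith", "MO"]
```

The database only sees the tables and predicates it actually needs, and the developer never has to maintain a join-the-world query in an XML file.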

On top of everything else, the production Oracle servers have really been struggling lately, and the DBA's are looking under rocks for any performance improvements they can find.  Since this bungled search technique is really, really inefficient in terms of database execution speed (in addition to being a gaping black void of development time), I thought I'd run this by the DBA's again.  After all, they'd probably seen the "normal" queries as they're encoded in XML, but maybe they'd never seen one of these search monstrosities, so I asked.  "What's up with the XML file?" I asked.  "Haven't you seen what that does to these search queries?  Haven't you seen all the extra joins we don't really need?  Wouldn't you like those database cycles back?"

Do you know what the DBA said?  "Tain't me that's forcing the XML thing - go find another tree to bark up."

Now, that means that the "architecture people" are blaming the use of this file on the "database people", and the "database people" are blaming it on the "architecture people".

Sigh.

Yes, this is a government agency, but I'd bet you'll run into this in some other places, too, because this is one of the universal truths of organizational dynamics.  I haven't run this one all the way to ground yet, but I clearly should have known better.  I don't fall for "that's the way we've always done it" anymore, but I'm going to have to work on my screwy-reason radar so I don't let another one like this by me.  Pay attention so you don't buy any bridges (or wagon wheels), either.

Just for kicks, by the way, here's a version of the original "chariot wheels" story that claims that the railroad gauge is more than just a coincidence.  You can draw your own conclusions - the XML query file still stinks.

On Tooltips and Affordances

I just got a new smartphone - a T-Mobile Wing, in fact, and I like it a lot.  I've never used Windows Mobile for any extended length of time, though, so I'm still learning a few things.  This morning, while trying to figure out what a button did, I caught myself doing something astounding, and I gained a whole new appreciation of affordances.

This phone, if you're not familiar, is a touch-screen smartphone with a slide-out keyboard, so if I'm doing anything remotely complicated, I'm usually using a stylus to point to the screen.  This is sort of interesting all by itself, because in many ways the stylus acts as an interface metaphor for a mouse, which is, in many ways, acting as an interface metaphor for a finger.  It's no wonder parts of the UI are screwed up!

So I was looking at the Calendar - a screen I'd used a few dozen times, and I wanted to move to the next week.  I knew I could go to the menu to do this, but I thought perhaps there was an easier way to get there (I've been finding all sorts of those while learning to use this phone).  There was a little button on a little toolbar, and I didn't know what the button did.  So I took my stylus and held it still, poised a few millimeters above the touch-sensitive screen.

I was waiting, obviously, for the tooltip that never came.  I expected the button (especially a toolbar button) to have a tooltip.  This is an affordance of toolbar buttons, and my misguided gesture was a failed attempt to exercise it.

There are several interesting observations to be had here, beyond the obvious, "Dave's a moron" one.  Here are a few that sprang to mind:

  • In the week or so I've had this device, I've adopted the stylus as a seamless analog to a mouse, to the extent that I don't differentiate the things that one can do that the other can't.
  • As users gain experience and UI metaphors grow in penetration, they become their own UI "primitive".  If I hadn't been using toolbars with tooltips on their buttons for the last (mumble, mumble) years, I would never have been conditioned to look for them on a new interface.
  • The smartphone (and the Windows smartphone in particular) is a horribly difficult device to support as a developer.  I've been quite aware as I've used this interface that some of the actions I perform with a stylus would be really difficult to support well on a phone without a touch-sensitive screen, and yet you see a lot of applications try to support both platforms with one version of software - a pretty tall order if your interface is non-trivial.  The whole portrait-to-landscape pivot every time I pop out my keyboard would be enough all by itself to make this difficult.
  • Given the above, I think you have to tip a hat once again at Apple for the design of the iPhone.  Granted, their tight control of the whole hardware-software platform is the only way that this is possible, but I think it's still great execution.  This format is *not* easy.

Finally, from a design standpoint, this underscored for me the need to obtain real-world usability information about a system.  I'm no Alan Cooper, but I'm better than your average bear at UI design, and I never would have seen this one coming if I hadn't done it myself.  It's just too hard to relax all of your preconceptions and pre-loaded context when you're designing a user interface.  Real-world usage of a system will almost always turn up a few gems like this.  If you don't know about them, it doesn't mean they're not happening - it means you need to get your head out of the sand and pay attention to your users.
