On Tooltips and Affordances

I just got a new smartphone - a T-Mobile Wing, in fact, and I like it a lot.  I've never used Windows Mobile for any extended length of time, though, so I'm still learning a few things.  This morning, while trying to figure out what a button did, I caught myself doing something astounding, and I gained a whole new appreciation of affordances.

This phone, if you're not familiar, is a touch-screen smartphone with a slide-out keyboard, so if I'm doing anything remotely complicated, I'm usually using a stylus to point to the screen.  This is sort of interesting all by itself, because in many ways the stylus acts as an interface metaphor for a mouse, which is, in many ways, acting as an interface metaphor for a finger.  It's no wonder parts of the UI are screwed up!

So I was looking at the Calendar - a screen I'd used a few dozen times - and I wanted to move to the next week.  I knew I could go to the menu to do this, but I thought perhaps there was an easier way to get there (I've been finding all sorts of those while learning to use this phone).  There was a little button on a little toolbar, and I didn't know what the button did.  So I took my stylus and held it still, poised a few millimeters above the touch-sensitive screen.

I was waiting, obviously, for the tooltip that never came.  I expected the button (especially a toolbar button) to have a tooltip.  This is an affordance of toolbar buttons, and my misguided gesture was a failed attempt to exercise this affordance.
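Desktop toolkits hand you this affordance almost for free, which is part of why the expectation runs so deep.  Here's a minimal sketch - desktop Java Swing, purely for illustration, with made-up button and hint text - of the one-liner I was reflexively expecting that toolbar button to have behind it:

```java
import javax.swing.*;

// Illustrative sketch only (not from any real Calendar app): on the
// desktop, a toolbar button advertises what it does through a hover
// tooltip that the toolkit shows for you.
public class TooltipDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Calendar");
            JToolBar toolbar = new JToolBar();

            JButton nextWeek = new JButton(">>");
            // One line buys the affordance: hover for about a second and
            // the toolkit pops up the hint. There is no equivalent hover
            // event from a stylus poised above a resistive touch screen.
            nextWeek.setToolTipText("Go to next week");

            toolbar.add(nextWeek);
            frame.add(toolbar);
            frame.setSize(320, 240);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```

The point isn't the API, it's the gesture: hover is a first-class input event on the desktop, and the touch screen simply has no signal to drive it.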

There are several interesting observations to be had here, beyond the obvious "Dave's a moron" one.  Here are a few that sprang to mind:

  • In the week or so I've had this device, I've adopted the stylus as a seamless analog to a mouse, to the extent that I no longer differentiate between what one can do and what the other can't.
  • As users gain experience and UI metaphors grow in penetration, they become their own UI "primitive".  If I hadn't been using toolbars with tooltips on their buttons for the last (mumble, mumble) years, I would never have been conditioned to look for them on a new interface.
  • The smartphone (and the Windows smartphone in particular) is a horribly difficult device to support as a developer.  I've been quite aware as I've used this interface that some of the actions I perform with a stylus would be really difficult to support well on a phone without a touch-sensitive screen, and yet you see a lot of applications try to support both platforms with one version of software - a pretty tall order if your interface is non-trivial.  The whole portrait-to-landscape pivot every time I pop out my keyboard would be enough all by itself to make this difficult (see the sketch after this list).
  • Given the above, I think you have to tip your hat once again to Apple for the design of the iPhone.  Granted, their tight control of the whole hardware-software platform is the only way that this is possible, but I think it's still great execution.  This format is *not* easy.
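To make that pivot problem concrete, here's a rough sketch of the kind of work every "one version for both form factors" app ends up doing - again in desktop Java Swing purely for illustration (a Windows Mobile form gets an analogous resize notification when the keyboard slides out), with the layout choices entirely hypothetical:

```java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

// Illustrative sketch only: reacting to a portrait-to-landscape pivot by
// swapping the layout. Real apps have to do something like this for
// every non-trivial screen they present.
public class PivotDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Pivot");
            JPanel content = new JPanel();
            content.add(new JButton("Next week"));
            content.add(new JButton("Prev week"));
            frame.add(content);

            frame.addComponentListener(new ComponentAdapter() {
                @Override
                public void componentResized(ComponentEvent e) {
                    boolean landscape = frame.getWidth() > frame.getHeight();
                    // Wide arrangement in landscape, tall in portrait.
                    content.setLayout(new GridLayout(landscape ? 1 : 2,
                                                     landscape ? 2 : 1));
                    content.revalidate();
                }
            });

            frame.setSize(240, 320); // start in "portrait"
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```

Multiply that little resize handler by every screen in your application, and the "tall order" starts to look more like a wall.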

Finally, from a design standpoint, this underscored for me the need to obtain real-world usability information about a system.  I'm no Alan Cooper, but I'm better than your average bear at UI design, and I never would have seen this one coming if I hadn't done it myself.  It's just too hard to relax all of your preconceptions and pre-loaded context when you're designing a user interface.  Real-world usage of a system will almost always turn up a few gems like this.  If you don't know about them, it doesn't mean they're not happening - it means you need to get your head out of the sand and pay attention to your users.

4 Replies to “On Tooltips and Affordances”

  1. Today at Web Directions North two separate presenters used the quote that “the best designs are those that 'dissolve into behaviour' (Naoto Fukasawa)”. We know to hover to gain additional contextual information on a computer, but it is arguably a learned behaviour. Mobile (and particularly the iPhone) is really being seen as a ground-breaker in terms of intuitive, gestural design, but that's not necessarily due to the fact it's mobile.

    Look at the Wii – it is a gaming system unlike any other. Yet gameplay comes relatively easily, because it is more natural. Traditional video games, with joysticks and controllers, are the ones that require the learning. The gestural interface of the Wii just “makes sense”.

    Computers have become such a part of our lives, we assume that the same metaphors on a desktop or laptop continue onto a mobile device. But really, they are dramatically different on several levels (context, usage, means to engage). The iPhone is genius because it offered a more intuitive, simple way to interact with the technology, as opposed to trying to fit with the same old models.

    1. You're absolutely right – a lot of how we interact with a computer is learned. From the mouse itself (there's a scene in a Star Trek movie where someone tries to use it as a microphone) to tooltips, there aren't any “real world” equivalents to these UI fixtures, yet they've become common knowledge, and we now expect to find them (or equivalents) in other UIs.

      Fortunately, the fact that we learned these goofy concepts in the first place indicates that we can learn new controls, too. The first time I played with a Wii, I was absolutely blown away – there was just no way the controller could be doing what it did. But after a while, I completely forgot about the controller – it dissolved, as you said. Pretty remarkable. Isn't it curious, though, that we've got a controller that can accurately model the hook in my bowling ball before we've achieved widespread voice control?

      Multi-touch is pretty cool, too. I saw Microsoft's Surface demoed at CodeMash, and it was pretty spectacular. I could really use one of those as an uber-remote / coffee table.

      I'm sure that accelerometer-based controls and multi-touch will eventually become standard on hand-held devices, and then we'll wonder why we can't wave our TV remotes to change channels.

      Thanks for the great comment.

Comments are closed.