Waving, not designing

I got a wave messaging power-up cover for my Nokia 3220 phone. It’s got a line of LEDs along the back of the phone, and when you wave it, you can spell out messages in the air. Check this out:

mindhacks-wave.jpg

(That’s me, by the way. I posted more about this to my other weblog, if you’re interested, but I’m going to continue here about embodied interaction and visual affordances.)
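The trick is persistence of vision: the row of LEDs flashes one column of the message at a time, and your arm sweeping through the air spreads those columns out into letters. Here’s a rough sketch of the idea in Python rather than whatever actually runs on the cover – the font table, the set_leds stub and the timing constant are all invented for illustration:

```python
import time

# Hypothetical mini-font: each character is a list of column bitmasks,
# one bit per LED in the row (bit 0 = bottom LED of seven).
FONT = {
    "H": [0b1111111, 0b0001000, 0b0001000, 0b0001000, 0b1111111],
    "I": [0b1000001, 0b1111111, 0b1000001],
}

COLUMN_DELAY = 0.002  # seconds each column stays lit; tuned to how fast you wave


def set_leds(mask):
    """Stand-in for driving the real LED row; here we just print the pattern."""
    print("".join("#" if mask & (1 << i) else "." for i in range(7)))


def wave_message(text):
    """Flash the message one column at a time as the phone sweeps through the air."""
    for ch in text.upper():
        for column in FONT.get(ch, []):
            set_leds(column)
            time.sleep(COLUMN_DELAY)
        set_leds(0)  # blank column between letters
        time.sleep(COLUMN_DELAY)


wave_message("HI")
```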

This phone is a pointer to something much larger: embodied interaction is an up-and-coming trend in product and interaction design at the moment – why use just your fingers to select what’s on a display when you can use your whole body? It’s often easier, and makes more sense. Like, when you use a hammer, you don’t key into a system “hit at point X with force F” and then stand back and let it happen; you just pick up the hammer and hit with it, using your body to judge strength and your eyes to judge position.

Modern technology has always acknowledged the constraints of the mind and body, of course, but always implicitly. The keyboard works because the keys don’t change function (the letter “B” is always the letter “B”), and because there aren’t too many: that works well with our memory. The keyboard isn’t too large, and that works well with the physics of how fast our hands can move and what our handspan is. The icons on the screen feel like objects because we can move them round independently and they’re outlined, and that plays to our built-in object recognition abilities. Windows on-screen can go behind one another because we realise that objects still exist when occluded by other objects, and buttons work well with shades of grey around them because we interpret shading and shadows, so the buttons look pressable.

Some bits of the computer interface take even more for granted. Imagine an alien using a laptop trackpad. They’d say “what? This is weird, you have to move a substance with a particular conductive property over this surface to make the cursor move round” – and that particular conductive property is that of our fingers, of course. (Try using your trackpad with a piece of plastic; it won’t work. It’s tuned to human flesh.)

Embodied interaction design asks two things. First: how can we have a more general interaction with our technology, so that instead of having to encode everything we do as mouse movements and key presses, we can just do what we usually do? Second: just as we know the span of the hand and so can make a usable mouse or keyboard, what’s the handspan of the brain, so we can make an interface which takes advantage of that?

ipod-scrollwheel.png

One great example of this in action is the iPod scrollwheel. You move round it with your thumb (it’s a trackpad) to scroll, but it doesn’t actually move – unlike the earliest iPods, whose wheels did physically turn. There’s an interesting interaction design problem there: with the early iPods, the moving scrollwheel was coloured white, the same as the rest of the mp3 player. You’d discover the wheel moved just by picking up the device and touching it; the surface would yield. But with these new iPods, the scrollwheel doesn’t move, so how should the designers advertise this capability?
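Since the wheel never physically turns, all the interesting work happens in software: the pad just reports where your thumb is, and the firmware turns circular motion into scroll steps. Here’s a minimal sketch of that mapping in Python – the class, the step size and the coordinate convention are my own inventions for illustration, not anything from Apple:

```python
import math

STEP_DEGREES = 15.0  # hypothetical: degrees of thumb travel per scroll step


class ScrollWheel:
    """Turn successive (x, y) touch positions on a circular pad into scroll steps."""

    def __init__(self, centre=(0.0, 0.0)):
        self.centre = centre
        self.last_angle = None   # angle of the previous touch sample, in degrees
        self.accumulated = 0.0   # angular travel not yet turned into whole steps

    def touch(self, x, y):
        """Feed one touch sample; returns how many items to scroll (+/-)."""
        angle = math.degrees(math.atan2(y - self.centre[1], x - self.centre[0]))
        if self.last_angle is None:
            self.last_angle = angle
            return 0
        delta = angle - self.last_angle
        # Handle the wrap as the thumb crosses the -180/+180 degree boundary.
        if delta > 180:
            delta -= 360
        elif delta < -180:
            delta += 360
        self.last_angle = angle
        self.accumulated += delta
        steps = int(self.accumulated / STEP_DEGREES)
        self.accumulated -= steps * STEP_DEGREES
        return steps

    def release(self):
        """Finger lifted: forget the last position so the next touch starts fresh."""
        self.last_angle = None


# Simulate a thumb sweeping a quarter turn around the pad.
wheel = ScrollWheel()
for deg in range(90, -1, -10):
    rad = math.radians(deg)
    print(wheel.touch(math.cos(rad), math.sin(rad)))
```

Feed it a stream of touch samples and it emits scroll steps; lift your finger, call release(), and the next touch starts from wherever you land rather than jumping.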

The answer takes into account affordances (a term coined by J J Gibson, used by the designer Don Norman, and discussed in the book in “Objects Ask to Be Used” [Hack #67]). When you see a coffee mug, you don’t just see its colour, its shape, and the fact the handle is on the left; you see the possibilities of using it: the moment you see the mug, your left hand prepares to pick it up. By seeing the possibility of use, that action is represented in your brain, and because it’s represented/encoded in your brain, you become more likely to perform that action. (This is just the same as when you scratch your nose. Somebody talking to you will have that action of “nose scratching” encoded in their brain. Just having that encoding active makes them more likely to perform that same action subsequently – scratching their own nose in turn – without realising it.)

Well, what does the scrollwheel need to do? It needs to visually advertise the fact that you put your fingers on it, grip, and pull round. We usually use rubber for grips, and we usually make it grey (people’s fingers carry a little dirt, and with a lot of use the rubber gets grubby; make the rubber grey to begin with and the dirt doesn’t show up). The fact that you don’t actually grip the scrollwheel with your thumb is irrelevant. All the designers have to do is make you touch it once, and then the response of the interface will give you the feedback to confirm you made the right choice.

So that’s what the designers have done: the scrollwheel is a dirty grey. It looks like it has been touched a lot. It looks rubbery (although it’s not). It communicates the affordance of doing something when touched and dragged. And so, when you’re a first-time user, that affordance gets encoded in your brain, you have the bright idea of touching the iPod there, and: ta-da, you’ve done the right thing.

That’s what easy-to-use interfaces are all about, and that – in a small way – is what embodied interaction is about too: understanding what the brain does (looks for visual affordances) and making use of that knowledge to transform a non-moving piece of plastic into something which begs to be used.

Now, how did I get to iPods and affordances from wave-in-the-air phones? Ah, doesn’t matter, they’re all part of the same design trend.

3 thoughts on “Waving, not designing”

  1. ipod’s affordances

    Over at Mind Hacks there is a nice article about, well, a bunch of stuff. It mentions affordances, though, and the way that the designers of the iPod built its affordances right. So that’s what the designers have done:

  2. “it has all kinds of sensors inside it to tell where my hands are and how fast they’re moving”
    i was just thinking this might have applications for sign language! (after all it’s “signing” 🙂 put a “ring” on each finger and you may have the equivalent of viavoice or dragon naturally speaking at your fingertips 😀
    cheers!

  3. a couple of things:
    It’s often said that the icons and desktop windows environment is intuitive, but I’m not sure how much that’s really the case (for everyone). I know at least one individual whom I had a really hard time teaching the windows interface to (and it took me ages to realise why) – she literally didn’t ‘get’ the whole occlusion idea – she couldn’t distinguish where one window ended and another started, or when one was ‘behind’ another. I wondered (because it was so strong) whether there might exist some kind of visual dyslexia (vislexia? dysvisia?) that makes learning technology and interface metaphors much harder for some than others.
    Secondly, trackpads work for cats, as Pushkin has just kindly demonstrated to me.
