Microcosmographia by William Van Hecke

I Believe That Force Touch is Our Future

Watch people handling a touch device. There’s this awkward gesture where they splay all their fingers out as far as possible to get them out of the way, then dip their index finger down toward the screen, gingerly tap the thing they’re aiming for, and then swoop outta there before they accidentally hit something else. Half the time they have to repeat the whole process because they missed the ill-defined button boundary. So much of our cognitive and motor resources are devoted to avoiding interactions that are too easy to trigger that it’s actually harder to do what we really want.

Compare that to how we interact with ordinary objects. Pretty much nothing that you handle in the “real world” demands that you be so careful about whether your skin is going to accidentally brush against part of it and, like, send an unedited email screed out across the world or something.

Okay. So Apple Watch introduced this Force Touch thing. Touch for one result, touch a bit harder for a different result. That’s a smart way to get one more level of expression from a tiny touch screen.

On a bigger touch screen, like your iPhone or iPad, you might start imagining how Force Touch could make interaction more expressive. Kinda like when we first met iPad, we started dreaming up all these wacky gestures you could do with multiple fingers on its big screen. But like most of those gestures, while they give you more expressiveness, they also give you more opportunities to hide interactions behind undiscoverable inputs, more things a user has to try in order to find out whether what they want to do is possible, more ambiguity about which input the user was really trying to do, &c. When what you really need in most cases is more concreteness. I think we should resist the temptation to add more layers of input, and instead move completely from one kind of touch to the other.

I think I want a world where Force Touch simply replaces weak touch. In this world, weak touch does nothing. In this world, simply touching your device becomes a safe thing to do again. You can handle it like a normal person handling a normal object. You can point to something you’re showing to someone else. You can hold the device however is comfortable. When you really want to interact with something on the screen, you really press it! Getting your finger into position on the screen and then applying pressure would make you much less likely to miss. Everything would, I think, feel much more real, solid, and reliable.
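Just to make that concrete, here’s a rough sketch of what a control in that world might look like: weak contact is ignored entirely, and the action only fires once the press crosses a force threshold. This assumes a force-capable screen and borrows UIKit’s UITouch force properties; the 0.5 threshold and the class itself are made up for illustration, not anything Apple ships.

```swift
import UIKit

// Sketch only: a control that ignores weak contact and fires on a firm press.
class FirmPressButton: UIControl {

    // Fraction of the hardware’s maximum force that counts as a “real” press (arbitrary).
    private let activationThreshold: CGFloat = 0.5
    private var hasActivated = false

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        hasActivated = false
        // Weak contact does nothing — no highlight, no action.
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, !hasActivated,
              touch.maximumPossibleForce > 0 else { return }
        // Only once the press gets firm enough does anything actually happen.
        if touch.force / touch.maximumPossibleForce >= activationThreshold {
            hasActivated = true
            sendActions(for: .primaryActionTriggered)
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        hasActivated = false
    }
}
```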

Mayyybe weak touch should be allowed to highlight UI elements and help you know when you’ve acquired a target. Maybe a tiny bit of force feedback could help reinforce that. But that’s it! No hover-based mousey mystery-meat navigation like on the web. Enforcing this at the OS level, by providing an API that only adjusts the appearance of an element, may be the right way to keep designers from abusing it.
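As a thought experiment, that OS-level restriction might look something like this: the system routes weak touches to a hook whose only possible output is an appearance change, so there’s simply no place to hang behavior off of them. Every name here (WeakTouchAppearance, WeakTouchHighlighting) is hypothetical, not an existing UIKit API.

```swift
import UIKit

// Hypothetical, illustration only: a weak touch can change how an element
// looks, but the hook gives designers no way to trigger its action.
enum WeakTouchAppearance {
    case none          // element stays as-is
    case highlighted   // subtle acknowledgment that you’ve acquired the target
}

protocol WeakTouchHighlighting {
    // The system would call this on weak contact; returning an appearance
    // is the only thing you can do — there’s no hook to run behavior.
    func weakTouchAppearance(for touch: UITouch) -> WeakTouchAppearance
}
```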

I could also imagine that inputs requiring rapid successive touches, like keyboards, would still need to work with weak touches. But at least the keyboard is something you pull out when you need it and stow away when you don’t, thus avoiding the worst of the accidental-input worry.