Microcosmographia by William Van Hecke

Microcosmographia xix: How User Interfaces Are Still Failing Us, part ii

Microcosmographia is a newsletter thing about honestly trying to understand design and humanity.

Time for another installment in this series. It seemed appropriate, since I have been thinking about 3D Touch and the Apple Pencil and the other interactions introduced this morning.

I am all in favor of 3D Touch, which at long last adds some nuance to the Pictures Under Glass experience of Multi-Touch devices. To be honest I was kind of hoping that when we got pressure sensitivity, we would have the restraint to use it to make the interactions we already had more reliable and forgiving, rather than adding a whole new layer of invisible interactions on top of it all. Of course I have not used 3D Touch yet, but I am a bit nervous that like edge-swipes and two-finger taps and long presses and double-taps, it’ll be yet another class of secret inputs that people accidentally do when they didn’t mean to, or have trouble doing when they do mean to.

It also kinda seems that while Apple boasts of using “depth” as both a visual design principle and an interaction design principle, it’s not clear how the two relate. Some stuff appears to be in front of other stuff, blurrily or parallaxily, sometimes! Pushing a thing away from you harder makes it… spring toward you and open up? Pushing even harder than that makes it… open… more!!

Again, that’s probably not fair — I still need to try it for myself. But the actual thing I wanted to write about today is something that goes hand in hand with pressure sensitivity. It’s…

Tactility
If pressure sensitivity is the introduction of depth to inputs, tactility is its output counterpart. How strange is it for our human fingers to be interacting with objects all day that visually move and react with such high fidelity, but which feel like nothing? What would it mean for a digital experience to be satisfyingly tactile?

I have a long-held dream of a “computing” experience that’s literally a big heavy wooden desk or careworn satchel full of stuff. It contains hardcover copies of every book ever written, traditional drafting and writing tools that behave better than the real thing, fantastical artifacts and implements that do things no traditional tool ever could have… All of that and more, with all the magic and convenience that we have come to expect from digital networked technology. This is going to remain a dream for a very long time, probably until we have nanotech that can literally construct the tools for you just as you retrieve them from the drawer or the bag, and transmute them into the next set of tools when you’re done.

Until we have that indistinguishable-from-magic, Bag of Holding sort of experience, what is a realistic next step for tactility? My favorite right now is raised pixels. Imagine a 1-bit screen buffer that you could “draw” onto just like the display. At, say, half the pixel density. When it’s filled with “black”, the screen is flat. But pixels you draw with “white” become raised. Just a tiny, tiny bit — a fraction of the pixel width is probably enough. I thought the mechanism for this would have to be actually physically lifting a tiny cell up from the surface of the display, but it sounds like some smart folks are figuring out how to do it with electromagnetic signals instead.
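Just to make the idea concrete, here is a minimal sketch of what that 1-bit tactile buffer might look like in code. Everything here — the `TactileBuffer` name, the `FLAT`/`RAISED` values, the half-density choice — is invented for illustration; no such hardware or API actually exists.

```python
# Hypothetical sketch of the "raised pixels" idea: a 1-bit buffer you
# "draw" onto like a display, at half the display's pixel density.
# 0 = flat against the glass, 1 = raised a tiny fraction of a pixel width.

FLAT, RAISED = 0, 1

class TactileBuffer:
    """A 1-bit grid mirroring the display at half its resolution."""

    def __init__(self, display_width, display_height):
        # Half the pixel density in each dimension.
        self.width = display_width // 2
        self.height = display_height // 2
        self.cells = [[FLAT] * self.width for _ in range(self.height)]

    def raise_pixel(self, x, y):
        self.cells[y][x] = RAISED

    def flatten(self, x, y):
        self.cells[y][x] = FLAT

    def raise_rect(self, x, y, w, h):
        # E.g. give a button or a keyboard key a physical edge you can feel.
        for row in range(y, y + h):
            for col in range(x, x + w):
                self.cells[row][col] = RAISED

# A 2208x1242 display would get a 1104x621 tactile grid.
buf = TactileBuffer(2208, 1242)
buf.raise_rect(10, 10, 40, 12)
```

The single bit per cell is the whole point: the interesting question is not how much a pixel rises, but simply which regions of the screen feel different from the rest.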

With this one bit of precision, you could:

I also just really enjoy the 1-bit texture idea because it brings back memories of the desktop pattern editor on the original Mac. ^__^

Thank You and Be Well

I'm trying to restore my iPhone, which totally died while I was putting the iOS 9 GM seed on it. Fneh! So that's all for today.