Jonty Sharples, Design Director at creative innovation agency Albion, thinks that we’re getting bored with touch, but gesture, which is the long-term successor, isn’t ready yet. So we’re going to have voice foisted on us in the meantime – and it’s going to be really, really annoying.
Touch-screens currently dominate the consumer experience, from smartphones and tablets to the kiosks aiding/hindering people in public places. ‘Pressing buttons under glass’ is the de facto way of controlling any digital system, even to the extent of it cropping up in inappropriate places – surely the physics of a touchscreen laptop are all wrong? (I’m looking at you, Microsoft.)
But designers and developers are now on the hunt for the next thing. They’re bored with touch, already finding restrictive what seemed so liberating just a couple of years ago. Haptics are coming, but I’m not sure they’ll see the same kind of adoption rate as touch. No, the next interface they’re looking to is voice control. Unfortunately, this, I think, is a short-term solution, inappropriate in all but a few, very select circumstances.
Imagine the following scenario: you’re on the train, sat next to one of those people. You know the ones – people who still have the keyboard ‘click’ sound activated on their smartphone. Who, for whatever reason, have also failed to turn off the email-sending ‘whoosh’. And who, at this very moment, are conducting their third tedious, vocal, opinionated phone call. Not a pleasant thought. Now imagine how much worse it will get when many of the inputs that today are made with a silent touch are made by talking to their device. Urgh.
I’m being over-dramatic but, if you were lucky enough to catch Google’s original Project Glass vision piece, you’ll have noticed the distinct lack of human interaction throughout the day. Yes, problems appear elegantly solved by Glass, but this is the story of one character. Envisage a third of the population mumbling into chunky eyewear whilst actively ignoring one another and you’ll begin to see why I’m sceptical.
The world needs fewer bubbles. You only have to step on to public transport to see how insular we all are. I’m currently typing this on an iPad, on a train journey, headphones muting the whooshes and clicks and inane chatter. The guy opposite is playing Angry Birds, the lady next to him is gently ladysnoring, and the chap to my right is watching a pirate copy of the latest Bourne movie on his super-size Android device. Even in the bookstore, ‘Project Glass Dude’ fails to ask a single person anything – it’s a glossy dystopian future.
In her most recent book, Alone Together, Sherry Turkle addresses the topic of us being ‘alone together’: seemingly surrounded by ‘virtual’ (a filthy word) friends and acquaintances, but really isolated in our own little anti-social bubbles, awaiting the next pop-up to let us know someone has noticed our most recent nugget of narcissism.
Siri and, more recently, the successful integration of Voice Search in Google products, mean we’re already standing right next to ‘Project Glass Dude’. Further iterations of these technologies will embed themselves in our lives. Some for good, and some for ill.
Of course etiquette will be established, but there will always be those who are happy to flout these conventions; the kids who live-stream their lives are the ones who will be setting the bar for what’s acceptable. Apparently it’s just fine for people to pay only ‘continuous partial attention’ to meaningful real-world interactions, in order to prod at their glazed electronics. When they’re talking to their devices instead, how will we know what’s intended for us and what’s intended for the virtual assistant?
Voice is a great interface in certain situations. In the car. In a hospital operating theatre. I’m sure there are (a few) more. But making it the new default, because designers and developers are bored with touch, seems perverse and detrimental. Can we please just think this through a little bit?
I’m not arguing for the continued hegemony of touch. Goodness only knows, touch can be tiring. In Microsoft’s vision film, made back in 2011, it’s all wafer-thin glass devices, touch and reactive surfaces. Every interaction seems to have a conspicuous reaction, something I’m fairly sure we’re all getting more than a little tired of. It’s exhausting being pestered by bouncing this and wobbling that. Blame the movies: everything beeps and jiggles in Tony Stark’s garage.
But, for me, the next important interaction likely to enter our lives will be gestural. Its roots will be in touch, with many of the affordances an established paradigm furnishes us with: design patterns, established mental models and management of expectation. As with gestural touch, it clearly has, and will have, its faults, but I believe that with the right approach it can be made to work efficiently and unobtrusively for a high proportion of the population.
Of course it’s been around for a few years now in the videogames world. When I was first introduced to the Microsoft Kinect dev kit (at the time an unassuming box with a couple of cameras inside), I was awestruck. Now, as with all things tech, we’re reducing the size and increasing the power and accuracy. The Leap Motion Controller is one of the latest developments in this area: an object around the same size as a flash drive, but with an interaction space of eight cubic feet and the ability to track finger gestures to within 1/100th of a millimetre. Minority Report has arrived. You can be Iron Man (without the bleeps). Caveat: it hasn’t shipped yet.
I believe that the era of gesture control that will be enabled by the Leap tech, and what comes after it, will yield more subtle, useful and expressive interactions. When John Underkoffler was designing the interface for Minority Report, Steven Spielberg told him it should look like the characters were “conducting an orchestra”. Now that’s a beautiful vision. The gestural control of the future should not be tiring or bothersome, but elegant and accurate.