I often get a bit annoyed at my computer’s user interface. Why, after all these years, are we still interacting with our computers through a primitive keyboard, a simplistic pointing device and a flat, glowing 2D rectangle of limited size? This kind of user interface has been around for decades and feels like such a bottleneck. Steve Jobs once called the personal computer a “bicycle for the mind”. Why can’t we have a motorcycle for the mind? Or a race car? Or a helicopter? Where is the real innovation in user interfaces?
Of course there have been plenty of attempts at dramatically improving the way in which we interact with computers, but most have fallen short so far. Several generations of virtual reality gear have failed to catch on with consumers. Augmented reality is still not much more than a novelty. Voice recognition is nice and increasingly ubiquitous, but very limited in its use cases. And despite the best efforts of Elon Musk and others, we are presumably still very far away from a direct brain-to-computer interface.
But when you really think about it, we have actually experienced a revolution in user interfaces over the last few decades, along with an incredible proliferation of underlying software. There is an increasingly large number of computing devices (in the broadest sense) that surround us, and all of them have user interfaces. But interestingly, despite using these UIs constantly, we barely think about them.
When I was born in 1971, our household didn’t have a single microchip in it. Devices were mechanical and electromechanical. The first microchip was probably a pocket calculator that we got when I was about 8. A VCR, presumably powered by a simple microchip, followed a few years later. And when I was 12, I got my first real home computer, the first device in our home that was truly programmable and could extend its functionality through software.
I just counted the number of devices in my household that are powered by some kind of microchip and run some kind of software.
I arrived at about 130, not counting devices such as battery packs, chargers and remote controls that presumably use a simple microcontroller.
*Seven pretty lightbulb computers and the light switch computer that controls them*
130 computers (and simpler software-controlled, computing-capable devices) in a single household, and I probably still missed quite a few. OK, we are a gadget-friendly family, but I would be surprised if a typical four-person household, once you really start counting, ended up with much less than a third or so of that number.
Where are all these microchips with their often hidden software and their specialized user interfaces? They are in:
- Smartphones and tablets
- Smart home assistants
- Smartwatches and fitness trackers
- Printers and scanners
- Wifi hotspots, cable modems and other connectivity stuff
- Modern household appliances
- Digital light switches and the connected lightbulbs they control
- Smart TVs and monitors
- Cars, e-bikes and e-scooters
- Connected music players, sound bars, stereo receivers, music keyboards
- Connected headphones
- Digital cameras, both standalone and connected
- Digital toys for the kids
Of course this proliferation has brought some unwanted complexity. I’m still a bit flabbergasted by the fact that some of my lightbulbs now need periodic software updates. Yes, lightbulbs. And interacting with any user interface has to be learned. While many vendors by now do a reasonably good job of sticking to familiar patterns (which people then call “intuitive”, even though it is of course an acquired skill), some still fail. I still don’t know how to set the timer on my digital oven without consulting the manual. And there are growing security concerns, since many of these devices are now connected to the Internet and often ship with lackluster security measures.
It is amazing how much computing power surrounds us without us really noticing it. Computerized devices have taken over and (hopefully) improved many daily tasks. By now it’s hard to point to a waking hour when we are not interacting with a user interface and the software behind it. It is worth occasionally reflecting on this impressive pace of technological progress.
And yet: Somehow I can’t help but feel dissatisfied about the improvements in user interfaces. What’s that old joke about Silicon Valley innovation? What we wanted was flying cars, and what we got were 140 characters and photo sharing. You could say the same about user interfaces: What we wanted was an immersive, high-bandwidth way to interact with computers to enhance the way we think. What we got were programmable light switches.
That’s not to take away from the amazing work that UI designers of all stripes are doing. But it increasingly feels like we are approaching a time when a true quantum leap in user interfaces is both possible and necessary. We have largely solved simple UIs for mundane daily tasks and traditional computing. Let’s hope that the next wave of innovation will change how we use our computers to do real work and to think better.
Where is this innovation going to come from? Of course all the established tech giants like Apple, Google and Facebook have teams that are doing research on these topics. But it remains to be seen if they have the patience to create something truly new. There is a lot of research in academia as well, and occasionally a startup tries to commercialize the results, but it’s still tough to get traction with isolated new concepts (like social robots, for example). Very likely we’ll see a new generation of startups that create these future devices out of humble experimental beginnings, much like Microsoft and Apple, which were part of the first batch of startups that tried to make the computer truly personal.