Jeff Atwood’s post two days ago inspired me to write this down. Thanks, Jeff.
“I can’t even remember the last time I was this excited about a computer.”
Our industry is young again, full of the bliss and sense of wonder and promise of adventure that comes with youth.
Computing feels young and fresh in a way that it hasn’t felt for years, and that has only happened to this degree at two other times in its history. Many old-timers, including myself, have said “this feels like 1980 again.”
It does indeed. And the reason why is all about user interfaces (UI).
Wave 1: Late 1950s through 60s
First, computing felt young in the late 1950s through the 60s because it was young, and it made computers personally available to a select few people. Having computers at all was new, and the ability to make a machine do things opened up a whole new world for a band of pioneers like Dijkstra and Hoare, Russell (Spacewar!), and Engelbart (the Mother of All Demos), who made these computers personal for at least a few people.
The machines were useful. But the excitement came from personally interacting with the machine.
Wave 2: Late 1970s through 80s
Second, computing felt young again in the late 1970s and 80s. Then, truly personal single-user computers were new. They opened up to a far wider audience the sense of wonder that came with having a computer of our very own, and often even with a colorful graphical interface to draw us into its new worlds. I’ll include Woods and Crowther (ADVENT) as an example, because they used a PDP as a personal computer (smile) and their game and many more like it took off on the earliest true PCs – Exidy Sorcerers and TRS-80s, Ataris and Apples. This was the second and much bigger wave of delivering computers we could personally interact with.
The machines were somewhat useful; people kept trying to justify paying $1,000 for one “to organize recipes.” (Really.) But the real reason people wanted them was that they were more intimate – the excitement once again came from personally interacting with the machine.
Non-wave: 1990s through mid-2000s
Although WIMP interfaces proliferated in the 1990s and did deliver benefits and usability, they were never exciting to the degree that computers were in the 80s. Why not? Because they weren’t nearly as transformative in making computers more personal, more fun. And then, to add insult to injury, once we shipped WIMPiness throughout the industry, we called it good for a decade and innovation in user interfaces stagnated.
I heard many people wonder whether computing was done, whether this was all there would be. Thanks, Apple, for once again taking the lead in proving them wrong.
Wave 3: Late 2000s through the 10s
Now, starting in the late 2000s and through the 10s, modern mobile computers are new and more personal than ever, and they’re just getting started. But what makes them so much more personal? There are three components of the new age of computing, and they’re all about UI (user interfaces)… count ’em: touch, speech, and gestures.
Now don’t get me wrong, these are in addition to keyboards and accurate pointing (mice, trackpads) and writing (pens), not instead of them. I don’t believe for a minute that keyboards and mice and pens are going away, because they’re incredibly useful – I agree with Joey Hess (HT to @codinghorror):
“If it doesn’t have a keyboard, I feel that my thoughts are being forced out through a straw.”
Nevertheless, touch, speech, and gestures are clearly important. Why? Because interacting with touch and speech and gestures is how we’re made, and that’s what lets these interactions power a new wave of making computers more personal. All three are coming to the mainstream in about that order…
… and all three aren’t done, they’re just getting started, and we can now see that at least the first two are inevitable. Consider:
Touchable screens on smartphones and tablets are just the beginning. Once we taste the ability to touch any screen, we immediately want and expect all screens to respond to touch. One year from now, when more people have had a taste of it, no one will question whether notebooks and monitors should respond to touch – though maybe a few will still question touch televisions. Two years from now, we’ll just assume that every screen should be touchable, and soon we’ll forget it was ever any other way. Anyone set on building non-touch mainstream screens of any size is on the wrong side of history.
Speech recognition on phones and in the living room is just the beginning. This week I recorded a podcast with Scott Hanselman, which will air in another week or two, in which Scott shared something he observed firsthand in his son: Once a child experiences saying “Xbox Pause,” he will expect all entertainment devices to respond to speech commands, and if they don’t they’re “broken.” Two years from now, speech will probably be the norm as one way to deliver primary commands. (Insert Scotty joke here.)
Likewise, gestures to control entertainment and games in the living room are just the beginning. Over the past year or two, when giving talks I’ve sometimes enjoyed messing with audiences by “changing” a PowerPoint slide by gesturing in the air in front of the screen while really changing the slide with the remote in my pocket. I immediately share the joke, of course, and we all have a laugh together, but the audience members more and more often just think it’s a new product and expect it to work. Gestures aren’t just for John Anderton any more.
Bringing touch and speech and gestures to all devices is a thrilling experience. They are just the beginning of the new wave that’s still growing. And this is the most personal wave so far.
This is an exciting and wonderful time to be part of our industry.
Computing is being reborn, again; we are young again.
16 thoughts on “Our industry is young again, and it’s all about UI”
I have to agree with the several comments criticizing touchscreens for their lack of tactile feedback. Perhaps I’m biased. I have worked in a few industries (aviation, naval ship control) that have convinced me touchscreens are a compromise (albeit a very good one). When working with the same device, doing the same thing over and over, when your attention should be devoted elsewhere, you don’t want to have to look at your finger to see where it is going. You want to be able to reach the control (button, knob, etc.), positively identify it, and manipulate it. You don’t look at the grip on your hammer while you’re driving a nail, do you?
Phones and tablets are perfect candidates for touchscreens for a couple of reasons. One, the real estate is limited, so doubling up its usefulness is a big plus. But the major reason touchscreens work is that these are general-purpose devices — they can be programmed to do anything. But if you were running the same app, day in and day out (such as, I don’t know, dialing a phone), a dedicated keypad would win any day.
Well said. Indeed it is another exciting new era.
I was fortunate as a teenager to spend my 1974 college vacation working at an IBM research centre on algorithmic optimizations and visualizations. Mainframes with full-screen editing terminals were hi-tech interactivity compared to the punched card and paper tape machines still commonplace in that era. One or two of us would stay back into the evening with the mainframe to ourselves, making for my first personal computer experience right down to crashing and rebooting (IPL) the IBM 370 from the front panel. 2MB of main memory was massive back then (when not shared among 30+ users). Returning for a spell 3 years later on a graphics and visualisation project, it was disappointing to find the mainframe room locked down, but by then we had more memory, fancier terminals, trackballs, and fewer crashes. The Z80 and 6502 had arrived by then, so personal computing was not a total washout, though small beer compared to using a mainframe as a personal computer. It took about 15 years to turn that 1974 vision into a flexible low-cost personal computer with comparable performance. Your ‘Phase 2’. Indeed it was exciting to be a part of making all that unfold. User interaction was the key driver through that era, as you point out.
If 15 years seems like a long time, I’ll mention gesture input. On a 1987 visit to a graphics chip manufacturer, I spotted a camera linked to a PC in reception. By the time the guy I was meeting had arrived, I’d convinced myself ‘gesture input’ was the next big thing. Back at base, despite using every memory-saving trick in the book and writing masses of assembler, I didn’t get much further than a jerky point-and-click mouse emulation, so I convinced myself we needed a fair few years of Moore’s law and set it to one side. Although I certainly didn’t expect to wait 25 years before programming the first low-cost commercial device!
So for me touch, speech and gesture have been around forever, waiting for the silicon curve to catch up. And it finally has. I’m 100% with you on your ‘four predictions’. Indeed it makes for an exciting time for those of us who recall ‘Wave 1’, through to those like my 14-year-old son who can look forward to being a part of Wave 4.
And yes, that’s all about UI too!
From Joe Duffy’s Weblog: http://www.bluebytesoftware.com/blog/2012/10/31/BewareTheString.aspx
.NET holds an enormous advantage over C++.
That might sound like an oxymoron, but our system can in fact beat the pants off all the popular native programming environments. The key to success? Thought and discipline.
It’s actually a pretty crappy time in the industry right now, for a lot of (probably most) professional programmers. We are being micro-managed more and more, we have to abide by silly, pointless procedures imposed by management and business drones, and to top it all off our standard of living faces constant downward pressure with our livelihoods being shipped to India.
I keep seeing this trend people are pushing and, while I *like* it for some uses (phones, casual browsing, low-bandwidth interactions like reading blogs and channel surfing), I don’t see it for anything of substance. Are people really going to sit there and work a 24″ touchscreen for 8 hours? Or sit around the office talking over their co-workers? Heck, my arms get tired just thinking about using a gesture interface as part of my work. I just don’t see it scaling.
Now I don’t think they are a passing fad, but if someone thinks that they will replace the keyboard/mouse as a *primary* interface for *all* uses, then I think they will have ceded a significant market segment to whoever wishes to step into the gap.
The annoying thing here is that I strongly suspect that a good keyboard/mouse UI and a good touch UI are mutually exclusive. Trying to write a desktop manager or app that is good at both will fail at one, and possibly both.
Love the insight about kids growing up with voice commands for the Xbox thinking anything without voice commands would be “broken”. It makes me think about all the times I’ve been frustrated with a product for not being able to do what I felt it should be able to, since everything else like it can, and feeling like it was broken and just saying “well, next time I’ll just buy from the competitor instead”.
This really could have farther-reaching consequences for the industry than I thought before: giants could fall, and usurpers could come out of nowhere. Things could look really different in a few years. That is exciting.
Am I the only one not convinced that the touchscreen revolution is going to happen, or be good if it does in a major way? I think “touch” or gesture may catch on in a lesser way and be useful if done properly, but any major push (excuse the pun) in this direction will simply lead to massive RSI problems and lawsuits. There is probably a significant amount of research still needed in the area to reduce that. I myself will not be looking to use touch too much because of this, and it’s why I’m keen for mouse and keyboard to remain popular until there are more advances in other devices that reduce RSI, not increase it.
I loathe touchscreens. They have zero tactile feedback. They require the screen to be within an arm’s length. They smudge the beautiful text and graphics with oily fingerprints. Sometimes your touch is recognized, but sometimes it isn’t. And you can never tell if the screen missed your touch, or if the device is just being sluggish and unresponsive. Touchscreens are the worst thing for ergonomics since the laptop. (Remember how revolutionary the detached keyboard was?) With a touchscreen, you need the visual feedback because there is no tactile feedback. You can close your eyes and type on a physical keyboard and know when you’ve made a typo by feel. With a touchscreen all you have is the visual feedback, but it always seems your hand is in the way of that feedback. Touchscreens are a step backwards for accessibility. I honestly don’t get the slate/smartphone craze. I find the devices entirely unintuitive, unsatisfying, and unusable.
I’m sure you’re right about touchscreens becoming the dominant form in a very short time. For me, though, that’s a huge negative. As a software engineer, I’m not excited; I’m hugely disappointed.
I really don’t get the excitement about touch screens. There are very few things we interact with in the real world via touch and awkward, unnatural multi-digit gestures…. In reality, we interact by grasping, and responding to subtle force feedback, and precise movements, and glances, and smells, and all sorts of things that dumb touch screens don’t even come close to.
Why ‘Touch’ TV? More like GESTURE TV, where you can change channels with hand gestures, split the TV into 2 channels, look at the guide with one hand gesture and set the volume with the other hand, or make the TV itself rotate 90 degrees to watch the news in portrait mode and back to landscape again to watch a movie. The possibilities are infinite.
Good insight. Holographic UIs projected from any mobile device are where we are headed IMHO, which sort of moves touch out of the picture (pun intended). Whatever the device, whatever the visual representation, we still need some way to build these UIs into a useful and meaningful UX. Human interaction via touch, speech and/or gestures is not only the beginning, it’s starting to get old. As Tom Brander states above, AI is important if it is coupled with “speech understanding”. To be able to “listen” to a conversation, actually understand the semantics as opposed to recognizing the syntactic elements of what is being talked about, and then act on that in a non-annoying and accurate way will be how we interact with knowledge and entertainment “UIs” in the future.
I think AI is about as important as UI
@Brian: I was skeptical of big TV-sized touchscreens until the first time I touched one. Once you touch a screen, any screen, there’s no going back. If TVs were touchable, we wouldn’t use that capability to change channels — we’d use that big screen to do more new things like interactive games, anything you could do with a big electronic whiteboard on steroids, and lots more we can’t imagine yet.
For example, I did a Q&A session on Channel 9 after my talk yesterday (it should air soon) where there was an 80″ touchscreen behind us, and it was incredibly useful simply to bring up the isocpp.org site and answer questions by just walking over and touching and pinch-zooming around the site: far faster than with a keyboard and mouse, huge and easy to show, and very interactive.
Honest, no screen should be without touch. Once you try touch you’ll never want to go back. And you’ll leave fingerprints all over old non-touch screens… really, everyone I know has made that mistake many times already. You’ll be seeing lots more fingerprints on Macbooks… :)
Touch TVs, now there’s a thought! Imagine the great unwashed actually getting off the sofa to change channels…