
Archive for the ‘Friday Thoughts’ Category

Jeff Atwood’s post two days ago inspired me to write this down. Thanks, Jeff.

“I can’t even remember the last time I was this excited about a computer.”

Jeff Atwood, November 1, 2012


Our industry is young again, full of the bliss and sense of wonder and promise of adventure that comes with youth.

Computing feels young and fresh in a way that it hasn’t felt for years, and that has only happened to this degree at two other times in its history. Many old-timers, including myself, have said “this feels like 1980 again.”

It does indeed. And the reason why is all about user interfaces (UI).

Wave 1: Late 1950s through 60s

First, computing felt young in the late 1950s through the 60s because it was young, and made computers personally available to a select few people. Having computers at all was new, and the ability to make a machine do things opened up a whole new world for a band of pioneers like Dijkstra and Hoare, and Russell (Spacewar!) and Engelbart (Mother of All Demos) who made these computers personal for at least a few people.

The machines were useful. But the excitement came from personally interacting with the machine.

Wave 2: Late 1970s through 80s

Second, computing felt young again in the late 1970s and 80s. Then, truly personal single-user computers were new. They opened up to a far wider audience the sense of wonder that came with having a computer of our very own, and often even with a colorful graphical interface to draw us into its new worlds. I’ll include Woods and Crowther (ADVENT) as an example, because they used a PDP as a personal computer (smile) and their game and many more like it took off on the earliest true PCs – Exidy Sorcerers and TRS-80s, Ataris and Apples. This was the second and much bigger wave of delivering computers we could personally interact with.

The machines were somewhat useful; people kept trying to justify paying $1,000 for one “to organize recipes.” (Really.) But the real reason people wanted them was that they were more intimate – the excitement once again came from personally interacting with the machine.

Non-wave: 1990s through mid-2000s

Although WIMP interfaces proliferated in the 1990s and did deliver benefits and usability, they never excited people to the degree computers did in the 80s. Why not? Because they weren’t nearly as transformative in making computers more personal, more fun. And then, to add insult to injury, once we shipped WIMPiness throughout the industry, we called it good for a decade and innovation in user interfaces stagnated.

I heard many people wonder whether computing was done, whether this was all there would be. Thanks, Apple, for once again taking the lead in proving them wrong.

Wave 3: Late 2000s through the 10s

Now, starting in the late 2000s and through the 10s, modern mobile computers are new and more personal than ever, and they’re just getting started. But what makes them so much more personal? There are three components of the new age of computing, and they’re all about UI… count ’em:

  1. Touch.
  2. Speech.
  3. Gestures.

Now don’t get me wrong, these are in addition to keyboards and accurate pointing (mice, trackpads) and writing (pens), not instead of them. I don’t believe for a minute that keyboards and mice and pens are going away, because they’re incredibly useful – I agree with Joey Hess (HT to @codinghorror):

“If it doesn’t have a keyboard, I feel that my thoughts are being forced out through a straw.”

Nevertheless, touch, speech, and gestures are clearly important. Why? Because interacting with touch and speech and gestures is how we’re made, and that’s what lets these interactions power a new wave of making computers more personal. All three are coming to the mainstream in about that order…

Four predictions

… and all three aren’t done, they’re just getting started, and we can now see that at least the first two are inevitable. Consider:

Touchable screens on smartphones and tablets are just the beginning. Once we taste the ability to touch any screen, we immediately want and expect all screens to respond to touch. One year from now, when more people have had a taste of it, no one will question whether notebooks and monitors should respond to touch – though maybe a few will still question touch televisions. Two years from now, we’ll just assume that every screen should be touchable, and soon we’ll forget it was ever any other way. Anyone set on building non-touch mainstream screens of any size is on the wrong side of history.

Speech recognition on phones and in the living room is just the beginning. This week I recorded a podcast with Scott Hanselman, which will air in another week or two. During it, Scott shared something he observed firsthand in his son: once a child experiences saying “Xbox Pause,” he will expect all entertainment devices to respond to speech commands, and if they don’t, they’re “broken.” Two years from now, speech will probably be the norm as one way to deliver primary commands. (Insert Scotty joke here.)

Likewise, gestures to control entertainment and games in the living room are just the beginning. Over the past year or two, when giving talks I’ve sometimes enjoyed messing with audiences by “changing” a PowerPoint slide with a gesture in the air in front of the screen while really advancing the slide with the remote in my pocket. I immediately share the joke, of course, and we all have a laugh together, but the audience members more and more often just think it’s a new product and expect it to work. Gestures aren’t just for John Anderton any more.

Bringing touch and speech and gestures to all devices is a thrilling experience. They are just the beginning of the new wave that’s still growing. And this is the most personal wave so far.

This is an exciting and wonderful time to be part of our industry.

Computing is being reborn, again; we are young again.


Earlier this week, Brent Schlender published selected highlights from his archive of Steve Jobs interview tapes.

Here’s one about us:

The difference between the best worker on computer hardware and the average may be 2 to 1, if you’re lucky. With automobiles, maybe 2 to 1. But in software, it’s at least 25 to 1. The difference between the average programmer and a great one is at least that.

This illustrates that there’s always lots of headroom to keep growing as a developer. We should always keep learning, and strive to become ever stronger at our craft.

You might also enjoy the history and observant commentary in Schlender’s other new article The Lost Steve Jobs Tapes, which focuses on “the wilderness years.”


Nicely put… Christian Lindholm:

Most companies (including web startups), he said, are looking to “wow” with their products, when in reality what they should be looking for is an “‘of course’ reaction from their users.”

Simple and obvious beats flashy. So many great designs are obvious in retrospect.

Hat tip to John Gruber.


I don’t normally blog poetry, but the passing of our giants this past month has put me in such a mood.


What is built becomes our future
Hand-constructed, stone by stone
Quarried by our elders’ labors
Fashioned with their strength and bone
Dare to dream, and dare to conquer
Fears by building castles grand
But ne’er forget, and e’er remember
To take a new step we must stand
On the shoulders of our giants
Who, seeing off into the morrow,
Made the dreams of past turn truth –
How their passing is our sorrow.


Speaking as a neutral observer with exactly zero opinion on any political question, and not even a cyberpunk reader given that I’ve read about two such novels in my life: Is it just me, or do the last few months’ global news headlines read like they were ghostwritten by Neal Stephenson?

I wonder if we may look back on 2010 as the year it became widely understood that we now live in a cyberpunk world. Many of 2010’s top stories read like sci-fi:

  • Stuxnet: Sovereign nations (apparently) carry out successful attacks on each other with surgically crafted malware — viruses and worms that target specific nuclear facilities, possibly causing more damage and delay to their targets’ weapons programs than might have been achieved with a conventional military strike.
  • Wikileaks: Stateless ‘Net organizations operating outside national laws fight information battles with major governments, up to and including the strongest industrial and military superpowers. The governments react by applying political pressure to powerful multinational corporations to try to force the stateless organizations off the ‘Net and cut off their support and funding, but these efforts succeed only temporarily as the target keeps moving and reappearing.
  • Anonymous: Small vigilante groups of private cybergunners retaliate by (or just latch onto a handy excuse to go) carrying out global attacks on the websites of multinational corporations, inflicting enough damage on Visa and Mastercard to temporarily take them off the ‘Net, while being repelled by cyberfortresses like Amazon and Paypal that have stronger digital defenses. But before we get too confident about Amazon’s strength, remember that this definitely ain’t the biggest attack they’ll ever see, just a 21st-century-cyberwar hors d’oeuvre: Who were these global attackers? About 100 people, many of them teenagers.
  • Assange: Charismatic cyberpersonalities operating principally on the ‘Net live as permanent residents of no nation, and roam the world (until arrested) wherever they can jack in, amid calls for their arrest and/or assassination.
  • Kinect: Your benevolent (you hope) living room game console can see you. Insert the obligatory “Minority Report UIs are no longer sci-fi” line here, with optional reference to Nineteen Eighty-Four.
  • Other: Never mind that organized crime has for years been well known to be behind much of the phishing, spam, card skimming, and other electronic and ’Net crime. None of that is new to 2010, but this year saw a significant uptick in the continued transition from boutique crime to serious organization, including spear-phishing aimed at specific high-profile organizations such as the U.S. military.

Over the coming months and years, it will be interesting to see how multinational corporations and sovereign governments react to what some of them no doubt view as a new stateless — transnational? extranational? supernational? — and therefore global threat to their normal way of doing business.


These are the two best links I’ve read in the wake of the Flash and HTML5 brouhaha(s). They discuss other informative points too, but their biggest value lies in discussing three things, to which I’ll offer the answers that make the most sense to me:

  • What is the web, really? “The web” is the cross-linked content, regardless of what in-browser/PC-based/phone-based generator/viewer/app is used to produce it and/or consume it.
  • Does web == in-browser? No. Native apps can be web apps just as much as in-browser ones, and increasingly many native apps are web apps. Conversely, not everything that runs in a browser is part of the web, even though most of it is, for obvious historical reasons.
  • Is it necessary/desirable/possible to make in-browser apps be like native apps? No, maybe, and maybe. The jury is still out, but at the moment developers are still trying while some pundits keep decrying.

Here are the two articles.

Understand the Web (Ben Ward)

This rambly piece needs serious editing, but is nevertheless very informative. Much of the debate about Flash and/or HTML5 conflates two things: the web, and application development platforms. They aren’t the same thing, and in fact are mostly orthogonal. From the article:

Think about that word; ‘web’. Think about why it was so named. It’s nothing to do with rich applications. Everything about web architecture; HTTP, HTML, CSS, is designed to serve and render content, but most importantly the web is formed where all of that content is linked together. That is what makes it amazing, and that is what defines it.

… [in the confused Flash and HTML5 debates] We’re talking about two very different things: The web of information and content, and a desire for a free, cross-platform Cocoa or .NET quality application framework that runs in the browsers people already use.

On a different note, speaking of the desire for super-rich in-browser apps, he adds:

Personally, aside from all of this almost ideological disagreement over what the web is for, and what you can reasonably expect it to be good at, I honestly think that ‘Desktop-class Web Applications’ are a fools folly. Java, Flash, AIR and QT demonstrate right now that cross-platform applications are always inferior to the functionality and operation of the native framework on a host platform. Steve Jobs is right in his comments that third-party frameworks are an obstacle to native functionality.

HTML5 and the Web (Tim Bray)

Again, what “the web” is – and it has nothing specifically to do with HTML. From the article:

The Web is a tripod, depending critically on three architectural principles:

  • Pieces of the Web, which we call Resources, are identified by short strings of characters called “URIs”.

  • Work is accomplished by exchanging messages, which comprise metadata and representations of Resources.

  • The representations are expressed in a number of well-defined data formats; you can count on the message data to tell you which one is in use. It is essential that some of the representation formats be capable of containing URIs. The “Web” in WWW is that composed by the universe of Resources linked by the URIs in their representations.

That’s all. You notice that there’s nothing there that depends crucially on any flavor of HTML. Speaking only for myself, an increasingly large proportion of my Web experience arrives in the form of feed entries and Twitter posts; not HTML at all, but 100% part of the Web.

On Flash · This may be a side-trip, but anyhow: I entirely loathe Flash but by any definition it’s part of the Web. It works just fine as a resource representation and it can contain URI hyperlinks.

Native Applications · A large proportion of the native applications on iPhone, and on Android, and on Windows, and on Mac, and on Linux, are Web applications. They depend in a fundamental way on being able to recognize and make intelligent use of hyperlinks and traverse the great big wonderful Web.

… So whatever you may think of native applications, please don’t try to pretend that they are (or are not) necessarily good citizens of the Web. Being native (or not) has nothing to do with it.

Good stuff.
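To make Bray’s “representations can contain URIs” point concrete, here’s a toy sketch of my own (hypothetical, with placeholder example.org URIs, not code from either article): a feed-style entry that isn’t HTML at all, yet still participates in the web of links because its representation carries URIs that can be extracted and followed.

    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
        // A non-HTML representation (JSON-ish); the URIs are placeholders.
        const std::string entry =
            "{ \"title\": \"HTML5 and the Web\","
            "  \"link\":  \"http://example.org/articles/html5\","
            "  \"via\":   \"http://example.org/feed\" }";

        // Pull out the URIs; following them is what stitches this entry into the Web.
        const std::regex uri("https?://[^\"\\s]+");
        for (std::sregex_iterator it(entry.begin(), entry.end(), uri), end; it != end; ++it)
            std::cout << "linked resource: " << it->str() << '\n';
    }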


Appetizers: Three cool links

The Design of Design by Fred Brooks (Amazon)
Yes, a new book by the Fred Brooks. Started reading it in Stanza on my iPhone today…

A Turing Machine (aturingmachine.com)
I’m in love. This is my favorite computer ever. I so want one.

The Evolution of Visual C++ in Visual Studio 2010 (VS Magazine)
A summary of what’s new in VC++ 2010, from the C++0x language and library features, to the concurrency runtime and libraries, to faster and more accurate IntelliSense (running the EDG engine), and more. All I can say is that VS 2010 will be available imminently…
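To give a flavor of the C++0x language features the article covers, here’s a minimal sketch of my own (an illustration, not code from the article) using two of the features that ship in VC++ 2010, auto and lambdas:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v;
        for (int i = 1; i <= 5; ++i)
            v.push_back(i * i);

        auto total = 0;                                   // auto: type deduced from the initializer
        std::for_each(v.begin(), v.end(),
                      [&total](int x) { total += x; });   // lambda capturing total by reference
        std::cout << "sum of squares 1..5 = " << total << std::endl;   // prints 55
    }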

Entree: My favorite link this week

What the iPad Really Is (Michael Swaine, Dr. Dobb’s)
Swaine gets it. The iPad is a “read-mostly” and “anywhere” device.

That’s why Steve Jobs is correct that this segment between notebooks and phones exists, and that serving that segment expands the market rather than competing directly with either neighboring segment. The tablet, spelled with “i” or otherwise, mostly doesn’t compete with desktops and notebooks (except for users who only do read-mostly stuff) or smartphones (except for users who need a bigger screen); it complements both. I’ve been using Windows convertible tablets off and on for years for this part of my computing life.

For my tablet needs, the iPad as launched had only two disappointments. The killer piece of missing software was a OneNote equivalent, and the killer piece of missing hardware was a stylus – really, because I want to finally have a real paper-notebook replacement. My convertible tablet/notebook has these covered, but maybe if a dedicated tablet can match this part too it can take over the “tablet segment” for me and I can go back to a notebook that’s a dedicated notebook. We’ll see.

Incidentally, now with Netflix (if it’s not a 4/1 joke) and Hulu and Flickr and ABC joining the burgeoning flood, it looks more and more like “iPad, iPad everywhere.”


