Steve Jobs

Today our industry is much less than it was yesterday. We have lost one of the great innovators. Even more importantly, Steve Jobs’ family has lost a husband and brother and father, and our thoughts are with them.

What can be said that hasn’t been said? Steve has been arguably the single most influential driver and shaper of personal computing in every one of its five decades, from the 1970s to the 2010s. It’s obviously true for the 1970s (Apple, Apple ][) and 1980s (Mac). As for the 1990s, it should be enough that the Mac shaped essentially all of that decade’s desktop and notebook platforms, and icing on the cake that technologies pioneered at NeXT and Pixar so heavily influenced personal gaming and other personal computing. In the 2000s, suffice it to say that Steve put the personal “i” into modern computing and again transformed this industry, and other industries. Looking forward, absent some other world-changing event, it’s clear that the rest of the 2010s will see personal computing develop along the trail he and his teams have blazed already in this decade.

Here is a measure of a man’s impact: Imagine how different — how diminished — the world would be today if Steve had passed away ten years ago.

Makes our hearts sink a little, doesn’t it?

Now imagine how different — how much more — the world would be if Steve had lived another ten years.

Or another twenty. Or another fifty, as though what we have seen were but the first half of his life, and the second half were spent not as a slowly aging, diminishing man, but with his health and strength and faculties as strong as ever for that much more time: a true fifty more years.

We are all cut down too soon.

Thanks, Steve.

2010: Cyberpunk World

Speaking as a neutral observer with exactly zero opinion on any political question, and not even a cyberpunk reader given that I’ve read about two such novels in my life: Is it just me, or do the last few months’ global news headlines read like they were ghostwritten by Neal Stephenson?

I wonder if we may look back on 2010 as the year it became widely understood that we now live in a cyberpunk world. Many of 2010’s top stories read like sci-fi:

  • Stuxnet: Sovereign nations (apparently) carry out successful attacks on each other with surgically crafted malware — viruses and worms that target specific nuclear facilities, possibly causing more damage and delay to their targets’ weapons programs than might have been achieved with a conventional military strike.
  • Wikileaks: Stateless ‘Net organizations operating outside national laws fight information battles with major governments, up to and including the strongest industrial and military superpowers. The governments react by applying political pressure to powerful multinational corporations to try to force the stateless organizations off the ‘Net and cut off their support and funding, but these efforts succeed only temporarily as the target keeps moving and reappearing.
  • Anonymous: Small vigilante groups of private cybergunners retaliate (or simply seize a handy excuse) by carrying out global attacks on the websites of multinational corporations, inflicting enough damage on Visa and Mastercard to temporarily take them off the ‘Net, while being repelled by cyberfortresses like Amazon and Paypal that have stronger digital defenses. But before we get too confident about Amazon’s strength, remember that this definitely ain’t the biggest attack they’ll ever see, just a 21st-century-cyberwar hors d’oeuvre: Who were these global attackers? About 100 people, many of them teenagers.
  • Assange: Charismatic cyberpersonalities operating principally on the ‘Net live as permanent residents of no nation, and roam the world (until arrested) wherever they can jack in, amid calls for their arrest and/or assassination.
  • Kinect: Your benevolent (you hope) living room game console can see you. Insert obligatory Minority Report UIs no longer sci-fi line here, with optional reference to Nineteen Eighty-Four.
  • Other: Never mind that organized crime has for years been well known to be behind much of the phishing, spam, card skimming, and other electronic and ‘Net crime. That isn’t new to 2010, but this year saw a significant uptick in the continued transition from boutique crime to serious organization, including spear-phishing aimed at specific high-profile organizations such as the U.S. military.

Over the coming months and years, it will be interesting to see how multinational corporations and sovereign governments react to what some of them no doubt view as a new stateless — transnational? extranational? supranational? — and therefore global threat to their normal way of doing business.

The “You Call This Journalism?” Department

The Inquirer isn’t normally this silly, and it isn’t even April 1. Nick Farrell writes:

Why Apple might regret the Ipad [sic]

THE IPAD HAS DOOMED Apple, according to market anlaysts [sic] that are expecting the tablet to spell trouble for its maker. … Rather than killing off the netbook, the Ipad [sic] is harming sales of the Ipod [sic] and Macbooks… if the analysts are right the Ipad [sic] has killed the Ipod [sic] Touch.

This is just silly, for four reasons. Three are obvious:

  • The iPod Touch fits in your pocket and can easily be with you all the time. Nothing bigger can ever kill it, but only replace it for a subset of users who don’t need in-pocket portability. (Besides, even if all iPod Touch buyers bought an iPad instead, the latter is more expensive and so the correct term would be not “kill” but “upsell”.)
  • The laptop has a real keyboard and full applications. Nothing less full-featured can ever kill it, but only replace it for a subset of users who don’t need the richer experience and applications.
  • Even if it were killing the other business outright, which it isn’t, it’s always better to eat your own lunch than wait for a competitor to do it.

And the fourth reason it’s silly? Let’s be very clear: The iPad sold 1 million units in its first 28 days. At $500-700 a pop, that’s over half a billion dollars of revenue in the first month alone, which puts the iPad on track to be a new billion-dollar business within two months.

Nick, I don’t think “regret” is the word you’re looking for.

Links I enjoyed this week: Flash and HTML5

These are the two best links I’ve read in the wake of the Flash and HTML5 brouhaha(s). They make other informative points too, but their biggest value lies in addressing three questions, to which I’ll offer the answers that make the most sense to me:

  • What is the web, really? “The web” is the cross-linked content, regardless of what in-browser/PC-based/phone-based generator/viewer/app is used to produce it and/or consume it.
  • Does web == in-browser? No. Native apps can be web apps just as much as in-browser apps can, and increasingly many native apps are web apps. Conversely, not everything that runs in a browser is part of the web, even though most such things are, for obvious historical reasons.
  • Is it necessary/desirable/possible to make in-browser apps be like native apps? No, maybe, and maybe. The jury is still out, but at the moment developers are still trying while some pundits keep decrying.

Here are the two articles.

Understand the Web (Ben Ward)

This rambly piece needs serious editing, but is nevertheless very informative. Much of the debate about Flash and/or HTML5 conflates two things: the web, and application development platforms. They aren’t the same thing, and in fact are mostly orthogonal. From the article:

Think about that word; ‘web’. Think about why it was so named. It’s nothing to do with rich applications. Everything about web architecture; HTTP, HTML, CSS, is designed to serve and render content, but most importantly the web is formed where all of that content is linked together. That is what makes it amazing, and that is what defines it.

… [in the confused Flash and HTML5 debates] We’re talking about two very different things: The web of information and content, and a desire for a free, cross-platform Cocoa or .NET quality application framework that runs in the browsers people already use.

On a different note, speaking of the desire for super-rich in-browser apps, he adds:

Personally, aside from all of this almost ideological disagreement over what the web is for, and what you can reasonably expect it to be good at, I honestly think that ‘Desktop-class Web Applications’ are a fools folly. Java, Flash, AIR and QT demonstrate right now that cross-platform applications are always inferior to the functionality and operation of the native framework on a host platform. Steve Jobs is right in his comments that third-party frameworks are an obstacle to native functionality.

HTML5 and the Web (Tim Bray)

Again, what “the web” is – and it has nothing specifically to do with HTML. From the article:

The Web is a tripod, depending critically on three architectural principles:

  • Pieces of the Web, which we call Resources, are identified by short strings of characters called “URIs”.

  • Work is accomplished by exchanging messages, which comprise metadata and representations of Resources.

  • The representations are expressed in a number of well-defined data formats; you can count on the message data to tell you which one is in use. It is essential that some of the representation formats be capable of containing URIs. The “Web” in WWW is that composed by the universe of Resources linked by the URIs in their representations.

That’s all. You notice that there’s nothing there that depends crucially on any flavor of HTML. Speaking only for myself, an increasingly large proportion of my Web experience arrives in the form of feed entries and Twitter posts; not HTML at all, but 100% part of the Web.

On Flash · This may be a side-trip, but anyhow: I entirely loathe Flash but by any definition it’s part of the Web. It works just fine as a resource representation and it can contain URI hyperlinks.

Native Applications · A large proportion of the native applications on iPhone, and on Android, and on Windows, and on Mac, and on Linux, are Web applications. They depend in a fundamental way on being able to recognize and make intelligent use of hyperlinks and traverse the great big wonderful Web.

… So whatever you may think of native applications, please don’t try to pretend that they are (or are not) necessarily good citizens of the Web. Being native (or not) has nothing to do with it.

Good stuff.

“Readability”

If you like reading just about anything on the web, including my articles, in a pretty nicely rendered plain format with no ads or other distractions, you might want to try out arc90’s Readability.

All you do is drag a bookmarklet to your bookmark bar, and then on any article-like web page you can click on the bookmarklet to turn this:

[image: the page as normally rendered]

into this (with a few choices each for font, size, and margin):

[image: the same page in Readability’s reading view]

This lets you gain a lot in readability when all you want to do is read the article itself, with basic text and graphics rendered fairly nicely. You do lose a little formatting, such as the colored text I sometimes use in my articles’ code examples, but the tradeoff is usually worth it.

I’ll keep trying Readability out, especially on smaller-than-desktop screens, to see if it’s a keeper. So far the overall effect is pretty nice. Thanks to James P. Hogan for the tip, even if the link his page gives is broken.

 

Note: If you’re using Mobile Safari (i.e., iPhone or iPad) you’ll need to do a little bit more work because that software doesn’t currently support dragging the bookmarklet to its bookmark bar. Fortunately, there’s a workaround:

  • Find the JavaScript code. I just made the bookmarklet on a desktop browser and copied the code from there into an email to myself (some things are faster with a keyboard and mouse). Alternatively, you can inspect the HTML using HTML Viewer right on the same device as Mobile Safari and cut-and-paste from that.
  • In Mobile Safari, make a new bookmarklet.
  • Edit it, and paste the JavaScript code as the URL.

As has been true since the early Mac days in the 1980s, Apple products and SDKs make every piece of functionality either super easy if it’s supported, or super painful if it’s not. :-)

Pre-emptive snarky comment: Yes, I know some people will retort that Microsoft and Linux products are better, because at least they consistently make everything super painful all of the time… but I think that’s only half true.

Flash In the Pan

You’ve no doubt noticed the recent acceleration of the transition from Flash to HTML5, thanks in large part to Apple’s refusal to support Flash on iPhone and iPad. First YouTube, and now TED, Vimeo, CBS, Time, and The New York Times are adding support for HTML5 in addition to, or instead of, Flash.

I’ve lately come to dislike Flash for personal reasons. Specifically, the Shockwave Player plugin is buggy and keeps hanging Chrome in particular, costing me a few minutes every few days while I wait for the “want to kill this plugin?” dialog to come up. It’s not the end of the world, just an annoyance. But it is annoying.

Aside: This seemed to start, or at least get a lot more frequent, around the time I installed Windows 7, which means we can have many interesting discussions about who broke whom. However, searching around, I see there has been a lot of chatter on Web forums about Shockwave Player crashes on multiple browsers and OSes.

But given that the world is starting to move on anyway, and because I like to tinker, I wondered what “the Web without Flash” would look like on my PC. I already know what it looks like on my iPhone, but the PC gets heavier use and I generally use it to visit richer sites.

So yesterday I decided to take the plunge and uninstall Flash entirely. Flash used to be a de rigueur add-on. Now I’ll see what the Web is like without it.

So far, so good. My Machine Architecture talk on Google Video doesn’t play, but other than that the main differences are that I see less distracting content, and that I get a cute little bar at the top of many pages asking me to install a plugin to fully display the page. Dismissing the bar lasts only for that page, but I find the bar quickly fades into the background of consciousness.

We’ll see how long I can last sans Flash.

Effective Concurrency: Design for Manycore Systems

This month’s Effective Concurrency column, Design for Manycore Systems, is now live on DDJ’s website.

From the article:

Why worry about “manycore” today?

Dual- and quad-core computers are obviously here to stay for mainstream desktops and notebooks. But do we really need to think about "many-core" systems if we’re building a typical mainstream application right now? I find that, to many developers, "many-core" systems still feel fairly remote, and not an immediate issue to think about as they’re working on their current product.

This column is about why it’s time right now for most of us to think about systems with lots of cores. In short: Software is the (only) gating factor; as that gate falls, hardware parallelism is coming more and sooner than many people yet believe. …
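
To make the point a little more concrete, here is a minimal sketch of the kind of structure the column argues for. This is my own illustration using today’s standard C++ threads, not code from the article: instead of hard-coding two or four threads, it partitions the work across however many hardware threads the machine reports, so the same code keeps speeding up as core counts grow.

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

long long parallel_sum(const std::vector<int>& data) {
    // Scale with whatever the hardware offers, rather than hard-coding "2" or "4".
    std::size_t num_threads = std::max(1u, std::thread::hardware_concurrency());
    if (!data.empty() && num_threads > data.size()) num_threads = data.size();

    std::vector<long long> partial(num_threads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = (data.size() + num_threads - 1) / num_threads;

    for (std::size_t t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = std::min(t * chunk, data.size());
            const std::size_t end   = std::min(begin + chunk, data.size());
            // Each worker writes only its own slot: no shared mutable state, no locks.
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}

On a dual-core laptop this runs two workers; on a 32-core box the identical code spreads across 32. (A production version would also pad or otherwise separate the per-thread partial results to avoid false sharing; see the “Eliminate False Sharing” column in the list below.)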

I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns:

The Pillars of Concurrency (Aug 2007)

How Much Scalability Do You Have or Need? (Sep 2007)

Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)

Apply Critical Sections Consistently (Nov 2007)

Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)

Use Lock Hierarchies to Avoid Deadlock (Jan 2008)

Break Amdahl’s Law! (Feb 2008)

Going Superlinear (Mar 2008)

Super Linearity and the Bigger Machine (Apr 2008)

Interrupt Politely (May 2008)

Maximize Locality, Minimize Contention (Jun 2008)

Choose Concurrency-Friendly Data Structures (Jul 2008)

The Many Faces of Deadlock (Aug 2008)

Lock-Free Code: A False Sense of Security (Sep 2008)

Writing Lock-Free Code: A Corrected Queue (Oct 2008)

Writing a Generalized Concurrent Queue (Nov 2008)

Understanding Parallel Performance (Dec 2008)

Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)

volatile vs. volatile (Feb 2009)

Sharing Is the Root of All Contention (Mar 2009)

Use Threads Correctly = Isolation + Asynchronous Messages (Apr 2009)

Use Thread Pools Correctly: Keep Tasks Short and Nonblocking (Apr 2009)

Eliminate False Sharing (May 2009)

Break Up and Interleave Work to Keep Threads Responsive (Jun 2009)

The Power of “In Progress” (Jul 2009)

Design for Manycore Systems (Aug 2009)

Income in Perspective: 2 Bppl @ $3/day

I just saw a CNN headline that read: “Young workers scrimp to live on $15/wk.” Before reading further, what do you think: Is that stunning and shocking? Or shockingly typical?

The story turned out to be a piece about white-collar workers in China trying to live frugally, spending only 100 Yuan on travel and food during the workweek to conserve funds. Of course, the workers’ actual total expenses and income are higher, because that $15/week figure doesn’t include weekend expenses and other major costs like rent. Even so, the story is considered newsworthy here, and is probably a shock to a number of readers in the western world.

But the headline wouldn’t surprise readers who are familiar with the approximate distribution of income/GDP/wealth in the world.

To illustrate, here are two personal data points from 2006, when my wife and I traveled to Kenya and Zambia to visit friends:

  • Income: In Kenya, we were told that being a staff worker at a safari lodge is considered a good job. What does it pay? About $2 per day, for long hours and six-day weeks. This isn’t unusual; in about 30 countries, including Kenya, more than half of the population earns under $2 per day. An estimated two billion people – 30% of the world’s population – live on an income of less than $3 per day. And $3 per day is about what the attention-grabbing CNN headline implies, though the actual story behind that headline is much less bad.
  • Cost of living: But what happens when we consider not just dollar-for-dollar comparisons, but purchasing power? Isn’t it less expensive to live in less-developed countries? Yes, it usually is, especially for shelter and services – there’s been some talk lately on the U.S. news about retiring in Mexico as a way for older people to save money in this economy – but the difference for goods of the same quality is often smaller than one might think. In Lusaka, the capital of Zambia, we found there were only three grocery stores [*] carrying goods similar to what we would expect to find in a U.S.-style Safeway, although of course the Zambian stores were much smaller than U.S. stores (more the size of a medium Trader Joe’s) and offered far less variety (something like a factor of 20 fewer brands and varieties). When we visited one of the stores, I picked up a few Western-style items, totaled the price and converted it to U.S. dollars in my head, and found that those products in Lusaka cost nearly the same as we would have paid for the same items in Seattle. Our local friend remarked: “Right, nobody who lives here would ever think of buying a can of soda pop.” Certainly not when a can of Coke costs a day’s wage for many people, and doesn’t confer any significant nutritional benefit.

The gulf between the western- and world-median standards of living is, simply put, vast – and growing. The standard of living that’s normal for most of the planet’s population is well nigh unimaginable to many of us in the western world, and even for those of us who’ve been there, it’s one thing to see it and quite another to really understand what such a life would be like. I don’t claim to.

 

[*] They might well be the only such stores in the country, not just the capital.

Answer to "16 Technologies": Engelbart and the Mother of All Demos

A few days ago I posted a challenge to name the researcher/team and approximate year each of the following 16 important technologies was first demonstrated. In brief, they were:

  • The personal computer for dedicated individual use all day long.
  • The mouse.
  • Internetworks.
  • Network service discovery.
  • Live collaboration and desktop/app sharing.
  • Hierarchical structure within a file system and within a document.
  • Cut/copy/paste, with drag-and-drop.
  • Paper metaphor for word processing.
  • Advanced pattern search and macro search.
  • Keyword search and multiple weighted keyword search.
  • Catalog-based information retrieval.
  • Flexible interactive formatting and line drawing.
  • Hyperlinks within a document and across documents.
  • Tagging graphics, and parts of graphics, as hyperlinks.
  • Shared workgroup document collaboration with annotations etc.
  • Live shared workgroup collaboration with live audio/video teleconference in a window.

A single answer to all of the above: Doug Engelbart and his ARC team, in what is now known as “The Mother of All Demos”, on Monday, December 9, 1968.

Last month, we marked the 40th anniversary of the famous Engelbart Demo, a truly unique “Eureka!” moment in the history of computing. 40 years ago, Engelbart and his visionary team foresaw — and prototyped and demonstrated — many essential details of what we take for granted as our commonplace computing environment today, including all of the above-listed technologies, most of them demonstrated for the first time in that talk.

This talk would be noteworthy and historic just for being the first time a “mouse” was shown and called by that name. Yet the mouse was just one of over a dozen important innovations to be compellingly presented with working prototype implementations.

Note: Yes, some of the individual technologies have earlier theoretical roots. I deliberately phrased the question to focus on implementations because it’s great to imagine a new idea, but it isn’t engineering until we prove it can work by actually building it. For example, consider hypertext: Vannevar Bush’s Memex, vintage 1945, was a theoretical “proto-hypertext” system, but it unfortunately remained theoretical, understandably so given the nascent state of computers at the time. Project Xanadu, started in 1960, pursued similar ideas but wasn’t demonstrated until 1972. The Engelbart Demo was the first time that hypertext was publicly shown in a working form, together with a slew of other important working innovations that combined to deliver an unprecedented tour de force. What made it compelling wasn’t just the individual ideas, but the working demonstrations to show that the ideas worked and how they could combine and interact in wonderful ways.

Recommended viewing

You can watch the 100-minute talk here (Stanford University) in sections with commentary, and here (Google Video) all in one go.