The 2008 Media Inflection: Meet Dr. Web, the New Gorilla

[edited 2009.01.15 to add link to DDJ’s announcement]

2008 was quite a year, full of landmark events that were certainly historic, if not always welcome.

If I had to pick one technology-related highlight from the past year, it would be this: A notable inflection point in the ongoing shift from traditional media to the web. Given that that tide is still in progress, why single out 2008? I think we’ll look back at 2008, especially the fourth quarter, as a turning point when the web became an A-list media outlet and first started to beat up, and even replace, major legacy competitors in newspapers, technical magazines, movies, and TV. An inflection point, if you will, where the web clearly stopped being the pencil-necked upstart, and visibly emerged as the new gorilla flexing its advertising-revenue-pumped-up muscles and kicking sand on the others.

In 2008, and particularly in the last month, the web began to outright replace some existing newspapers and technology and programming magazines.

  • December 2008: The first two major city newspapers go web-only or web-mostly. The Detroit News and The Detroit Free Press announced truncated print editions and reduced print delivery. Newspapers in several other cities seem likely to follow fairly soon.
  • As of January 2009: PC Magazine is going “digital only.” After the January issue, the print magazine will disappear. Instead, the content will appear exclusively on the web.
  • As of January 2009: Dr. Dobb’s Journal is permanently suspending print publication and going web-only. Some of the content will be available as a new “Dr. Dobb’s Report” section of InformationWeek. My Effective Concurrency column will continue, and I’ll continue to blog when new columns go live on the web, so if you’re reading the column via this blog nothing will significantly change for you.

As of next month, the only major technical programmer’s trade magazines still available in print, that I know of, are platform- and technology-specific ones like asp.netPRO and MSDN Magazine — and even those increasingly feature online-only content. For example, from MSDN Mag’s January 2009 editor’s note:

As we continue to grow our coverage to keep pace with the rapidly expanding set of technologies, we will often offer content exclusively online. So please check in frequently!

Gotta love RSS (and Atom etc.): Every text feed is like a magazine or newspaper column, every blogger a columnist. Every audio/video podcast feed is like a radio or TV series, or a radio station or TV channel. And our feed reader is the new magazine/newspaper, as we subscribe to columnists to make our personal custom newsmagazine. But RSS readers, along with RSS-consuming clients like iTunes, are more — they’re our personal selection, not only of the columns we want to read (on whatever topics we want, including the funnies section), but also of the media we want to hear and watch. It’s increasingly our way to choose the text, audio, and video we want all together. Who knew that a large chunk of the coming media convergence would come in the shape of RSS readers?
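Since the paragraph leans on how feed readers actually work, here is a small illustrative sketch (Python standard library only; the feed and its titles are made up for the example) of pulling the "column" structure out of an RSS 2.0 document:

```python
import xml.etree.ElementTree as ET

# A tiny, invented RSS 2.0 feed: the <channel> is the "column",
# and each <item> is one installment of it.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Sutter's Mill</title>
    <item><title>Effective Concurrency: Optimizing a Concurrent Queue</title></item>
    <item><title>The 2008 Media Inflection</title></item>
  </channel>
</rss>"""

def column_titles(feed_xml):
    """Return (channel title, list of item titles) for an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    channel = root.find("channel")
    items = [item.findtext("title") for item in channel.findall("item")]
    return channel.findtext("title"), items

name, posts = column_titles(FEED)
print(name, posts)
```

A real reader does the same thing over many feeds and merges the results into one personal front page, which is the convergence point being made above.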

And other media are feeling the pressure from Dr. Web, the new gorilla:

  • This week (late December 2008), even cable operator Time Warner pushed web delivery for TV. Time Warner has had various contract disputes with Viacom and some local stations. But as part of the dispute:
      Time Warner will respond to Viacom’s advertisement, [Time Warner spokesman] Mr. Dudley said, by highlighting the availability of television content on the Internet.

      “We will be telling our customers exactly where they can go to see these programs online,” Mr. Dudley said. “We’ll also be telling them how they can hook up their PCs to a television set.”

  • During October-December 2008, Netflix’s Watch Instantly started to turn into a juggernaut. It’s interesting enough that you can watch 12,000+ movies and TV shows streamed over the net to your PC at high quality. As of October 2008, you can get them on your Mac. As of November 2008, on your Xbox. As of December 2008, on your TiVo. As noted in one recent article:
      “It’s a good strategic move,” said Andy Hargreaves, an analyst with Pacific Crest Securities. “Netflix sees the world will go digital, no matter what they do. They realize there is more to be lost by waiting than doing it early.”

And I won’t even get started on SaaS: hosted rich GUI apps served up on the web, for example the December 2008 (again) announcement about Office Online.

Yes, don’t forget 2008, especially December 2008: The month the first major newspapers moved mostly to the web and abandoned print at least partly; the month that PC Magazine and Dr. Dobb’s suspended print publication and went web-only; the month Netflix Watch Instantly arrived in the living room on TiVos after hitting Xboxes a fortnight before; the month cable provider Time Warner threatened to tell people to watch TV on the net; and the month even Microsoft announced their intent to deliver significant Office web applications.

I for one welcome Dr. Web, our new Gorilla and media overlord!

Besides, what choice do I have?

TRS-80 vs. Alpha, and Parallel Optimization

Lest people get the wrong idea, I enjoy reading Jeff Atwood’s blog and agree with much of what he writes so entertainingly and provocatively. So far I’ve only responded when I strongly felt differently about something, which has been a grand total of twice now.

So let me also offer an example of something I wholeheartedly agree with. Yesterday, Jeff cited what is also my own favorite Programming Pearls figure:


Despite the enduring wonder of the yearly parade of newer, better hardware, we’d also do well to remember my all time favorite graph from Programming Pearls:


Everything is fast for small n.

Spot on. If you’re a professional programmer and haven’t read Programming Pearls yet, “run don’t walk” to your bookstore of choice.
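To see the "everything is fast for small n" effect for yourself, here's a minimal sketch (Python, purely illustrative) timing a quadratic insertion sort against the built-in O(n log n) sort; at n = 100 both finish in effectively no time, and only the larger n separates them:

```python
import random
import time

def insertion_sort(a):
    """Classic O(n^2) sort: perfectly fine for small n, painful for large n."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

for n in (100, 3000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    insertion_sort(data)
    t1 = time.perf_counter()
    sorted(data)
    t2 = time.perf_counter()
    print(f"n={n:>5}: O(n^2) took {t1 - t0:.4f}s, O(n log n) took {t2 - t1:.4f}s")
```

The exact timings depend on your machine, but the shape of the result is the point of the graph: the curves only diverge once n gets big.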

Incidentally, just to tie this in to parallel computing as well, Jeff’s article also cites a nice graph of optimizations that improved NDepend:

Patrick Smacchia’s lessons learned from a real-world focus on performance is a great case study in optimization.

[NDepend optimization graph]

Patrick was able to improve NDepend analysis performance fourfold, and cut memory consumption in half. As predicted, much of this improvement was algorithmic in nature, but at least half of the overall improvement came from a variety of other optimization techniques.

As I’ve said many times, measure twice, optimize once: Know when and where to optimize. Profilers are your friend. As Patrick writes:

When it comes to enhancing performance there is only one way to do things right: measure and focus your work on the part of the code that really takes the bulk of time, not the one that you think takes the bulk of time.
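"Measure, don't guess" is easy to demonstrate. As an illustrative sketch (Python's built-in cProfile standing in for whatever profiler fits your environment, with made-up function names), the report points at the real hot spot regardless of which function we assumed was slow:

```python
import cProfile
import io
import pstats

def cheap_step():
    return sum(range(100))

def expensive_step():
    # Deliberately the hot spot: the profile should finger this function.
    return sum(i * i for i in range(200_000))

def analysis():
    for _ in range(50):
        cheap_step()
    return expensive_step()

profiler = cProfile.Profile()
profiler.enable()
analysis()
profiler.disable()

# Print the top functions by cumulative time; expensive_step dominates,
# even though cheap_step is called 50 times as often.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

The profile, not the call count, is what tells you where the time actually goes.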

And of course, in our ever-more-multicore world, the contribution of parallelization gain will continue to grow and dominate the optimization of CPU-bound code. But as Patrick also notes, realizing that gain is not always trivial:

While we get the 15% gain from between 1 and 2 processors, the gain is almost zero between 2 and 4 processors. We identified some potential IO contentions and memory issues that will require more attention in the future. This leads to another lesson: Don’t expect that scaling on many processors will be something easy, even if you don’t share states and don’t use synchronization.
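Patrick's numbers are roughly what Amdahl's law would predict. As a sketch (reading the quoted "15% gain" as a measured speedup of 1.15x on two processors, which is my interpretation of the quote), we can back out the parallel fraction and see why four processors barely help:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n processors when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law: which p explains a measured speedup on n processors?"""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

# A 1.15x speedup on 2 processors implies only about 26% of the run parallelizes...
p = parallel_fraction(1.15, 2)
# ...which caps the predicted 4-processor speedup at roughly 1.24x, i.e.,
# very little additional gain beyond 2 processors, much as Patrick observed.
print(f"parallel fraction ~ {p:.2f}, predicted 4-core speedup ~ {amdahl_speedup(p, 4):.2f}x")
```

The arithmetic is simple, but it explains the "almost zero between 2 and 4 processors" result without needing to find any new bug: with that serial fraction, there simply isn't much more to gain.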

Rich-GUI SaaS/Web 2.0 Apps Should Not Be Considered Harmful

Yesterday, the ever-popular Jeff Atwood (of Coding Horror fame) wrote an article [*] on how not to write Web 2.0 UIs. Unfortunately, it’s exactly backwards: What he identifies as a problem is in fact not only desirable, but necessary.

  • [*] Aside: Jeff, I know you love pictures, but is that particular gratuitous one really necessary? Yes, I know it’s CGI, but it made me really hesitate about linking to your post and has nothing to do with your technical point.

Jeff observes correctly that when you write an application to run on a platform like Windows and/or OS X, your application should follow the local look-and-feel. Fine so far. But he then repeats a claim that I believe is incorrect, at least today, and based on a fallacy — and adds another fallacy:

[Quoting Bill Higgins]

  • … a Windows application should look and feel like a Windows application, a Mac application should look and feel like a Mac application, and a web application should look and feel like a web application.

Bill extends this to web applications: a web app that apes the conventions of a desktop application is attempting to cross the uncanny valley of user interface design. This is a bad idea for all the same reasons; the tiny flaws and imperfections of the simulation will be grossly magnified for users.

  • When you build a “desktop in the web browser”-style application, you’re violating users’ unwritten expectations of how a web application should look and behave.

There are actually two fallacies here.

Fallacy #1: “Look and feel like a web application”

The first fallacy is here:

a web application should look and feel like a web application.

violating users’ unwritten expectations of how a web application should look and behave.

These assertions beg the question: What does a web application “look and feel like,” and what do users expect? Also, are you talking about Web 1.0, where there is an answer to these questions, or Web 2.0, where there isn’t?

For Web 1.0 applications, the answer is fairly easy: They look like hyperlinked documents built on technologies like HTML and CSS. That’s what people expect, and get.

But the examples Bill uses aren’t Web 1.0 applications, they’re Web 2.0 applications. For Web 2.0 applications, there are no widely accepted UI standards, and applications are all over the map. Indeed, the whole point of Ajax-y/Web2.0-y applications is to get beyond the current 1.0-standard functionality.

Not only are there no widely-accepted UI standards, there aren’t even many widely-accepted UI technologies. Consider how many dissimilarities there are among just Flash, Silverlight, and JavaFX as these technologies compete for developer share. Then consider that even within any one of these technologies people actually build wildly diverse interfaces.

Here’s the main example these bloggers used:

Consider the Zimbra web-based email that Bill refers to.

[Zimbra email screenshot]

It’s pretty obvious that their inspiration was Microsoft Outlook, a desktop application.

[Outlook email screenshot]

But what’s wrong with Zimbra?

Here’s a Better Question #1: How could you do better?

And for bonus points, Still Better Question #2: What about OWA? Consider that Microsoft already provides essentially the same thing, with the same approach, in the form of Outlook Web Access, which looks remarkably like the usual Outlook [caveat: this is an example of why I write above and below that ‘most’ Web 2.0 apps don’t try to emulate a particular OS look-and-feel; this one does]. For example (a couple of sample shots taken from this brief video overview):

  • [OWA sample screenshots]

The rich UI isn’t a bug, it really is a feature — a killer feature we’re going to be seeing more of, not less of, because this is what delivering software-as-a-service (SaaS) is all about. Although I use the desktop Outlook most of the time, I like OWA and think it’s the best web-based email and calendaring I’ve seen, especially when I’m away from my machine (and I’ve tried several others, though granted you do need to be using Exchange). I suspect its UI conventions are probably pretty accessible even to non-Windows users, though that’s debatable and your mileage may vary.

Fallacy #2: “A web app that apes the conventions of a desktop application is attempting to cross the uncanny valley”

The second fallacy is Jeff’s comment (boldface original):

a web app that apes the conventions of a desktop application is attempting to cross the uncanny valley of user interface design.

I think this is flawed for three reasons.

First, the “uncanny valley” part makes the assumption that people will find it jarring that the app tries to look like a desktop application. (This concern is related to, but different from, the concern of fallacy #1.) But most such apps aren’t doing that at all, because they know their users will access them from PCs, Macs, and lots of different environments, and they have to look reasonable regardless of the user’s native environment. They’re usually not trying to duplicate a given platform’s native look and feel.

Second, what they are doing is borrowing from UI components and conventions that already work well in desktop GUI environments, and are common across many of those environments. When you have no standards for Web 2.0 look and feel, then doing the best you can by borrowing from ideas we already know work pretty well isn’t just okay, it’s necessary. What else can you do?

Finally, the worst part is this: The whole point of SaaS is to deliver desktop-like rich-GUI applications on the web. So what is being labeled ‘wrong’ above is the whole point of what we’re doing as an industry.

“SaaS/Web 2.0 on Web 1.0”: The new “GUI on DOS”

Most SaaS/Web 2.0 applications today look and feel pretty much the way GUI applications looked and felt on DOS, before technologies like Windows and OS/2 PM existed. Around the late 1980s, people wrote lots of GUI applications that ran on DOS, but we didn’t have a widely-used common GUI infrastructure that handled basic windows and menus and events, much less standards like CUA that tried to say how to use such a common infrastructure if we had it. So they each did their own thing, borrowing where possible from what seemed to work well for GUIs on other platforms.

Twenty years ago, everyone writing GUIs on DOS designed the UIs as best they could, borrowing where possible from what they saw worked on platforms like the Macintosh and Xerox Alto and Star — but the results were all over the map, and would stay that way until a standard environment, followed by standard guidelines, came into being.

Today, everyone writing rich Web 2.0 applications is doing their own thing, borrowing as best they can from Macs and Windows and others — but the results are all over the map, and will continue to be until there actually is such a thing as a UI standard for rich-GUI web applications. You can see that in the differences between Zimbra and Outlook Web Access. In the meantime, it’s not just okay to borrow from what we’ve learned on the desktop; it’s necessary.

And the question isn’t whether metaphors users already understand on the desktop will migrate to the web, but which ones and how soon, because that migration is the whole point of SaaS. The industry will soon go well beyond Google Apps, with offerings like Office Online already announced for the short term, putting still more rich-client GUI apps like word processors and spreadsheets in the browser (with functionality somewhere between Google Apps and the desktop version of Office).

Zimbra and Outlook Web Access aren’t examples of poor web app design, but exactly the opposite: They’re just the beginning of the next wave of rich web apps.

Effective Concurrency: Measuring Parallel Performance — Optimizing a Concurrent Queue

This month’s Effective Concurrency column is special — it turned into a feature-length article. (I don’t know whether it’ll officially be called a “feature” or a “column” in the print issue.) “Measuring Parallel Performance: Optimizing a Concurrent Queue” just went live on DDJ’s site, and will also appear in the print magazine.

From the article:

How would you write a fast, internally synchronized queue, one that callers can use without any explicit external locking or other synchronization? Let us count the ways…or four of them, at least, and compare their performance. We’ll start with a baseline program and then successively apply three optimization techniques, each time stopping to measure each change’s relative performance for queue items of different sizes to see how much each trick really bought us.

The goal of the article is to show how to measure and understand our code’s parallel performance, including the actual effect of specific optimizations. Disclaimer: The goal of this article is not to write the fastest possible queue in the world (though it’s pretty good). I’ve already had plenty of email on recent queue-related columns from people who sent me their “faster” implementations; writing lock-free queues seems to be a popular indoor sport. Interestingly, for well over half of the ones I received, a 30-second glance at the code was enough to determine that the code had to be incorrect. Why? Because if it doesn’t do any synchronization on the shared variables — if there aren’t any locks, atomics, fences, or other synchronization in the code — then it has races, which will manifest in practice even on forgiving platforms like x86/x64, and there’s no need to look further. (For more details, see the September 2008 column, Lock-Free Code: A False Sense of Security. Even some code submissions I received in response to that very article were broken for the same reasons shown in that article.)
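For flavor, here is what "internally synchronized" means at the baseline, before any optimization: a deliberately simple sketch in Python (the article's own code is C++; this is just an illustration of the contract, not any of the article's tuned variants) of a queue whose callers need no external locking because one condition variable guards all shared state:

```python
import threading
from collections import deque

class SyncQueue:
    """A minimal internally synchronized FIFO queue: producers and consumers
    need no locking at the call sites. Illustrative sketch only; the
    article's C++ versions start from this idea and then optimize it."""

    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()  # guards _items

    def push(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify()  # wake one waiting consumer

    def pop(self):
        with self._cond:
            while not self._items:   # re-check: guards against spurious wakeups
                self._cond.wait()
            return self._items.popleft()

# Usage: one producer, one consumer, no external synchronization anywhere.
q = SyncQueue()
results = []
consumer = threading.Thread(target=lambda: results.extend(q.pop() for _ in range(100)))
consumer.start()
for i in range(100):
    q.push(i)
consumer.join()
print(len(results), "items received in order")
```

The single lock is exactly the kind of contention point the article then measures and whittles away at, step by step.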

I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns and the DDJ print magazine issue in which they first appeared:

The Pillars of Concurrency (Aug 2007)

How Much Scalability Do You Have or Need? (Sep 2007)

Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)

Apply Critical Sections Consistently (Nov 2007)

Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)

Use Lock Hierarchies to Avoid Deadlock (Jan 2008)

Break Amdahl’s Law! (Feb 2008)

Going Superlinear (Mar 2008)

Super Linearity and the Bigger Machine (Apr 2008)

Interrupt Politely (May 2008)

Maximize Locality, Minimize Contention (Jun 2008)

Choose Concurrency-Friendly Data Structures (Jul 2008)

The Many Faces of Deadlock (Aug 2008)

Lock-Free Code: A False Sense of Security (Sep 2008)

Writing Lock-Free Code: A Corrected Queue (Oct 2008)

Writing a Generalized Concurrent Queue (Nov 2008)

Understanding Parallel Performance (Dec 2008)

Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)