16 Important Technologies: Who demonstrated each one first?

We enjoy such an abundance of computing riches that it’s easy to take wonderful technological ideas for granted. Yet so many of the pieces of our modern computing experience that we consider routine today were at one time unimaginable. After all, back in the early days of computing, we were still discovering what these newfangled room-filling gadgets might eventually become capable of — who could have known then what using computers would be like today?

Of course, we have these technologies today because some visionaries did know, did imagine them… and, best of all, built and demonstrated them.

Hence today’s challenge:

Quiz: For each of the following 16 technologies that have become commonplace in our modern computing experience, give the researcher/team and approximate year that a working prototype was first demonstrated. How many can you answer without a web search?

  • The personal computer for dedicated individual use, that one person can have at their disposal all day long. (Hint: Before the Altair in 1975 and Apple I in 1976.)
  • Mouse input with a graphical pointer. (Hint: Before the Xerox Alto at Xerox PARC in 1973.)
  • Internetworks across campuses and cities. (Hint: Before Ethernet at Xerox PARC (again) in 1973.)
  • Discovery of ‘who’s got what service’ in an internetwork.
  • Using internetworks for live collaboration, not just file sharing. (Hint: Before RDP and others.)
  • Hierarchical structure within a file system and within a document. (Hint: Before Unix.)
  • Cut/copy/paste, with drag-and-drop.
  • Paper metaphor for word processing, starting with a blank piece of paper and then applying formatting and navigating levels in the structure of text.
  • Advanced pattern search and macro search within documents. (Hint: Before MIT’s Emacs.)
  • Keyword search and multiple weighted keyword search. (Hint: Long before Google.)
  • Information retrieval through indirect construction of a catalog.
  • Flexible interactive formatting and line drawing.
  • Hyperlinks within a document and across documents, and “jumping on a link” to navigate. (Hint: Before Tim Berners-Lee invented the World Wide Web in 1989-1990.) (Hint’: Yes, before HyperCard too.)
  • Tagging graphics, and parts of graphics, as hyperlinks. (Hint: Before Flickr.)
  • Workgroup collaboration on a document, including collaborative annotations, allowing members of a group to use and modify a document. (Hint: Before Lotus Notes and Ward Cunningham’s Wikis.)
  • The next step up from that: Live collaboration on a document with screen sharing on the two writers’ computers so they can see what the other is doing — with live audio/video teleconference in a window at the same time. (Hint: Not Skype or LiveMeeting.)

The 2008 Media Inflection: Meet Dr. Web, the New Gorilla

[edited 2009.01.15 to add link to DDJ’s announcement]

2008 was quite a year, full of landmark events that were certainly historic, if not always welcome.

If I had to pick one technology-related highlight from the past year, it would be this: A notable inflection point in the ongoing shift from traditional media to the web. Given that this shift is still in progress, why single out 2008? I think we’ll look back at 2008, especially the fourth quarter, as a turning point when the web became an A-list media outlet and first started to beat up, and even replace, major legacy competitors in newspapers, technical magazines, movies, and TV. An inflection point, if you will, where the web clearly stopped being the pencil-necked upstart, and visibly emerged as the new gorilla flexing its advertising-revenue-pumped-up muscles and kicking sand on the others.

In 2008, and particularly in the last month, the web began to outright replace some existing newspapers and technology and programming magazines.

  • December 2008: The first two major city newspapers go web-only or web-mostly. The Detroit News and The Detroit Free Press announced truncated print editions and reduced print delivery. Newspapers in several other cities seem likely to follow fairly soon.
  • As of January 2009: PC Magazine is going “digital only.” After the January issue, the print magazine will disappear. Instead, the content will appear exclusively on the web.
  • As of January 2009: Dr. Dobb’s Journal is permanently suspending print publication and going web-only. Some of the content will be available as a new “Dr. Dobb’s Report” section of InformationWeek. My Effective Concurrency column will continue, and I’ll continue to blog when new columns go live on the web, so if you’re reading the column via this blog nothing will significantly change for you.

As of next month, the only major technical programmer’s trade magazines still available in print, that I know of, are platform- and technology-specific ones like asp.netPRO and MSDN Magazine — and even those increasingly feature online-only content. For example, from MSDN Mag’s January 2009 editor’s note:

As we continue to grow our coverage to keep pace with the rapidly expanding set of technologies, we will often offer content exclusively online at msdn.microsoft.com/magazine. So please check in frequently!

Gotta love RSS (and Atom etc.): Every text feed is like a magazine or newspaper column, every blogger a columnist. Every audio/video podcast feed is like a radio or TV series, or a radio station or TV channel. And our feed reader is the new magazine/newspaper, as we subscribe to columnists to make our personal custom newsmagazine. But RSS readers, along with RSS-consuming clients like iTunes, are more — they’re our personal selection, not only of the columns we want to read (on whatever topics we want, including the funnies section), but also of the media we want to hear and watch. It’s increasingly our way to choose the text, audio, and video we want all together. Who knew that a large chunk of the coming media convergence would come in the shape of RSS readers?

And other media are feeling the pressure from Dr. Web, the new gorilla:

  • This week (late December 2008), even cable operator Time Warner pushed web delivery for TV. Time Warner has had various contract disputes with Viacom and some local stations. But as part of the dispute:
      Time Warner will respond to Viacom’s advertisement, [Time Warner spokesman] Mr. Dudley said, by highlighting the availability of television content on the Internet.

      “We will be telling our customers exactly where they can go to see these programs online,” Mr. Dudley said. “We’ll also be telling them how they can hook up their PCs to a television set.”

  • During October-December 2008, Netflix’s Watch Instantly has started to turn into a juggernaut. It’s interesting enough that you can watch 12,000+ movies and TV shows streamed over the net to your PC at high quality. As of October 2008, you can get them on your Mac. As of November 2008, on your Xbox. As of December 2008, on your TiVo. As noted in one recent article:
      “It’s a good strategic move,” said Andy Hargreaves, an analyst with Pacific Crest Securities. “Netflix sees the world will go digital, no matter what they do. They realize there is more to be lost by waiting than doing it early.”

And I won’t even get started on SaaS: hosted rich GUI apps served up on the web, for example the December 2008 (again) announcement about Office Online.

Yes, don’t forget 2008, especially December 2008: The month the first major newspapers moved mostly to the web and abandoned print at least partly; the month that PC Magazine and Dr. Dobb’s suspended print publication and went web-only; the month Netflix Watch Instantly arrived in the living room on TiVos after hitting Xboxes a fortnight before; the month cable provider Time Warner threatened to tell people to watch TV on the net; and the month even Microsoft announced their intent to deliver significant Office web applications.

I for one welcome Dr. Web, our new Gorilla and media overlord!

Besides, what choice do I have?

Rich-GUI SaaS/Web 2.0 Apps Should Not Be Considered Harmful

Yesterday, the ever-popular Jeff Atwood (of Coding Horror fame) wrote an article [*] on how not to write Web 2.0 UIs. Unfortunately, it’s exactly backwards: What he identifies as a problem is in fact not only desirable, but necessary.

  • [*] Aside: Jeff, I know you love pictures, but is that particular gratuitous one really necessary? Yes, I know it’s CGI, but it made me really hesitate about linking to your post and has nothing to do with your technical point.

Jeff observes correctly that when you write an application to run on a platform like Windows and/or OS X, your application should follow the local look-and-feel. Fine so far. But he then repeats a claim that I believe is incorrect, at least today, and based on a fallacy — and adds another fallacy:

[Quoting Bill Higgins]

  • … a Windows application should look and feel like a Windows application, a Mac application should look and feel like a Mac application, and a web application should look and feel like a web application.

Bill extends this to web applications: a web app that apes the conventions of a desktop application is attempting to cross the uncanny valley of user interface design. This is a bad idea for all the same reasons; the tiny flaws and imperfections of the simulation will be grossly magnified for users.

  • When you build a “desktop in the web browser”-style application, you’re violating users’ unwritten expectations of how a web application should look and behave.

There are actually two fallacies here.

Fallacy #1: “Look and feel like a web application”

The first fallacy is here:

a web application should look and feel like a web application.

violating users’ unwritten expectations of how a web application should look and behave.

These assertions raise the question: What does a web application “look and feel like,” and what do users expect? Also, are you talking about Web 1.0, where there is an answer to these questions, or Web 2.0, where there isn’t?

For Web 1.0 applications, the answer is fairly easy: They look like hyperlinked documents built on technologies like HTML and CSS. That’s what people expect, and get.

But the examples Bill uses aren’t Web 1.0 applications, they’re Web 2.0 applications. For Web 2.0 applications, there are no widely accepted UI standards, and applications are all over the map. Indeed, the whole point of Ajax-y/Web2.0-y applications is to get beyond the current 1.0-standard functionality.

Not only are there no widely-accepted UI standards, there aren’t even many widely-accepted UI technologies. Consider how many dissimilarities there are among just Flash, Silverlight, and JavaFX as these technologies compete for developer share. Then consider that even within any one of these technologies people actually build wildly diverse interfaces.

Here’s the main example these bloggers used:

Consider the Zimbra web-based email that Bill refers to.

[screenshot: Zimbra web-based email]

It’s pretty obvious that their inspiration was Microsoft Outlook, a desktop application.

[screenshot: Microsoft Outlook]

But what’s wrong with Zimbra?

So here’s Better Question #1: How could you do better?

And for bonus points, Still Better Question #2: What about OWA? Consider that Microsoft already provides essentially the same thing, with the same approach, in the form of Outlook Web Access, which looks remarkably like the usual Outlook [caveat: this is an example of why I write above and below that ‘most’ Web 2.0 apps don’t try to emulate a particular OS look-and-feel; this one does]. For example (a couple of sample shots taken from this brief video overview):

[two screenshots: Outlook Web Access]

The rich UI isn’t a bug; it really is a feature — a killer feature we’re going to be seeing more of, not less of, because this is what delivering software-as-a-service (SaaS) is all about. Although I use the desktop Outlook most of the time, I like OWA and think it’s the best web-based email and calendaring I’ve seen, especially when I’m away from my machine (and I’ve tried several others, though granted you do need to be using Exchange). I suspect its UI conventions are probably pretty accessible even to non-Windows users, though that’s probably debatable and your mileage may vary.

Fallacy #2: “A web app that apes the conventions of a desktop application is attempting to cross the uncanny valley”

The second fallacy is Jeff’s comment (boldface original):

a web app that apes the conventions of a desktop application is attempting to cross the uncanny valley of user interface design.

I think this is flawed for three reasons.

First, the “uncanny valley” part makes the assumption that people will find it jarring that the app tries to look like a desktop application. (This concern is related to, but different from, the concern of fallacy #1.) But most such apps aren’t doing that at all, because they know their users will access them from PCs, Macs, and lots of different environments, and they have to look reasonable regardless of the user’s native environment. They’re usually not trying to duplicate a given platform’s native look and feel.

Second, what they are doing is borrowing from UI components and conventions that already work well in desktop GUI environments, and are common across many of those environments. When you have no standards for Web 2.0 look and feel, then doing the best you can by borrowing from ideas we already know work pretty well isn’t just okay, it’s necessary. What else can you do?

Finally, the worst part is this: The whole point of SaaS is to deliver desktop-like rich-GUI applications on the web. So what is being labeled ‘wrong’ above is the whole point of what we’re doing as an industry.

“SaaS/Web 2.0 on Web 1.0”: The new “GUI on DOS”

Most SaaS/Web 2.0 applications today look and feel pretty much the way GUI applications looked and felt on DOS, before technologies like Windows and OS/2 PM existed. Around the late 1980s, people wrote lots of GUI applications that ran on DOS, but we didn’t have a widely-used common GUI infrastructure that handled basic windows and menus and events, much less standards like CUA that tried to say how to use such a common infrastructure if we had it. So they each did their own thing, borrowing where possible from what seemed to work well for GUIs on other platforms.

Twenty years ago, everyone writing GUIs on DOS designed the UIs as best they could, borrowing where possible from what they saw worked on platforms like the Macintosh and Xerox Alto and Star — but the results were all over the map, and would stay that way until a standard environment, followed by standard guidelines, came into being.

Today, everyone writing rich Web 2.0 applications is doing their own thing, borrowing as best they can from Macs and Windows and others — but the results are all over the map, and will continue to be until there actually is such a thing as a UI standard for rich-GUI web applications. You can see that in the differences between Zimbra and Outlook Web Access. In the meantime, it’s not just okay to borrow from what we’ve learned on the desktop; it’s necessary.

And the question isn’t whether metaphors users already understand on the desktop will migrate to the web, but which ones and how soon, because that migration is the whole point of SaaS. The industry will soon be going well beyond Google Apps, with offerings like Office Online already announced for the short term, which put still more rich-client GUI apps like word processors and spreadsheets in the browser (with functionality somewhere between Google Apps and the desktop version of Office).

Zimbra and Outlook Web Access aren’t examples of poor web app design, but exactly the opposite: They’re just the beginning of the next wave of rich web apps.

September 2008 ISO C++ Standards Meeting: The Draft Has Landed, and a New Convener

The ISO C++ committee met in San Francisco, CA, on September 15-20. You can find the minutes here, including the votes to approve papers.

The most important thing the committee accomplished was this:

Complete C++0x draft published for international ballot

The biggest goal entering this meeting was to make C++0x feature-complete and stay on track to publish a complete public draft of C++0x for international review and comment — in ISO-speak, an official Committee Draft or CD. As I predicted in the summer, the committee achieved that at this meeting. Now the world will know the shape of C++0x in good detail. Here’s where to find it: The September C++0x working draft document is essentially the same as the September 2008 CD.

This is “it”: feature-complete C++0x, including the major feature of “concepts,” which had its own extensive set of papers for language and library extensions — I’ll stop there, but there are still more concepts papers on the mailing page and some more still to come during the CD phase. (If you get the impression that concepts is a big feature, well, it is indeed easily the biggest addition we made in C++0x.)

What’s next? As I’ve mentioned before, we’re planning to have two rounds of international comment review. The first of two opportunities for national bodies to give their comments is now underway; the second round will probably be this time next year. The only changes expected to be made between that CD and the final International Standard are bug fixes and clarifications. It’s helpful to think of a CD as a feature-complete beta, and we’re on track to ship one more beta before the full release.

And a new convener

On a personal note, I’m very happy to see this accomplished at the last meeting during my convenership. I’ve now served as secretary and then convener (chair) of the ISO C++ committee for over 10 years, and my second three-year term as convener ended one week after the San Francisco meeting. A decade is enough; I decided not to volunteer for another term as chair.

As of a few weeks ago, P. J. Plauger is the new convener of ISO/IEC JTC1/SC22/WG21 (C++). Many of you will know P.J. (or Bill, as he’s known within the committee) from his long service to the C and C++ communities, including that he has been a past convener of the ISO C standards committee, past editor of the C/C++ Users Journal, the principal author of the Dinkumware implementation of the C++ standard library, and recipient of the 2004 Dr Dobb’s Journal Excellence in Programming Award, among various other qualifications and honors. He has been a regular participant at ISO C++ meetings for about as long as they’ve been held, and his long experience with both the technology and the ISO standards world will serve WG21 well.

I’m very happy to have been able to chair the committee during the development of C++0x. Now as we move from “develop mode” into “ship mode” it will be great to have his experienced hand guiding the committee through the final ISO process. Thanks for volunteering, Bill!

Stroustrup & Sutter on C++ 2008, Second Showing: October 30-31, 2008, in Boston, MA, USA

[photo: Bjarne and Herb at S&S]

This spring at SD West in Santa Clara, Bjarne and I did a fresh-and-updated S&S event with lots of new material.

We don’t usually repeat the same material, but this time there’s been such demand that we agreed to do a repeat… four weeks from today, in Boston. More information and talk descriptions follow.

CONTENT ADVISORY

Again, usually our S&S events feature mostly new material, but this one is almost identical to the material we did in spring 2008 in Santa Clara. Bjarne is substituting one talk, and will present “C++ in Safety-Critical Systems” instead of his talk on C++’s design and evolution; and we’ll both be updating the material to reflect the current state of the draft ISO C++0x standard. But otherwise it’ll be identical.

So:

  • If you missed our event this spring, here’s your second chance! It was our highest-rated S&S ever, and in the post-conference survey we asked the question “Would you recommend this course to a colleague?” and 100% said yes.
  • If you already attended this spring and came to all our sessions, you’ve seen nearly all this material already, but feel free to encourage a friend or colleague to attend who you think would benefit from the material.

The Talks

Wednesday, October 29, 2008

C++0x Overview (Bjarne Stroustrup)

Safe Locking: Best Practices to Eliminate Race Conditions (Herb Sutter)

How to Design Good Interfaces (Bjarne Stroustrup)

Lock-Free Programming in C++—or How to Juggle Razor Blades (Herb Sutter)

Grill the Experts: Ask Us Anything! (Bjarne Stroustrup & Herb Sutter)

Thursday, October 30, 2008

[“Best of Stroustrup & Sutter”] Update of talk voted “Most Informative” at S&S 2007: Concepts and Generic Programming in C++0x (Bjarne Stroustrup)

What Not to Code: Avoiding Bad Design Choices and Worse Implementations (Herb Sutter)

C++ in Safety-Critical Systems (Bjarne Stroustrup)

How to Migrate C++ Code to the Manycore “Free Lunch” (Herb Sutter)

Discussion on Questions Raised During the Seminar (Herb Sutter & Bjarne Stroustrup)

Registration

This two-day seminar is getting billing on two different conferences that are running at the same time in the Hynes Convention Center: SD Best Practices and the Embedded Systems Conference. S&S is technically part of both conferences, which means you can attend S&S by registering through either one; both registration options include our two-day seminar.

I look forward to seeing many of you in Boston! Best wishes,

Herb

Data and Perspective

Even genuinely newsworthy topics can get distorted when commentators exaggerate or use data selectively. Here are two recent examples I noticed.

“This is the worst financial crisis since the Great Depression.” It’s true that it’s bad and even historic, and this sound bite correctly doesn’t actually claim it’s as bad as the Depression. I hope it doesn’t turn out to be in the same league as that; then, people were lining up at soup kitchens. For now, however, Apple is still on track to sell 10 million iPhone 3Gs this year, which says something.

“Yesterday [Monday, September 29, 2008] saw the worst single-day plunge in Dow Jones history.” “It’s a new Black Monday.” Well, these got my attention, because I remember Black Monday on October 19, 1987 very well. I was working in IT at a major bank, doing software application support for traders and related departments. When I went up to the trading floor that day, I immediately knew something was badly wrong because of the eerie sound as the elevator doors opened — a sound you never hear during trading hours, and certainly not from a room full of traders standing at their desks: silence.

Yesterday’s loss of 777 points was stunning as the largest single-day point loss in Dow history. But as a percentage loss that’s not even in the top 10 Bad Dow Days, all but two of which occurred before 1935. Those two since the Depression occurred on October 19 and 26, 1987, when the Dow lost 22.6% and then another 8% of its total value in single sessions, respectively #2 and #9 on the All-Time Bad Dow Days list. For perspective, as of this writing the Dow is down 19.8% so far this entire year, and it surely hasn’t been a good year. Today’s crisis is already historic and could well get worse yet, of course, but some of us do remember some pretty bad ones in the past.

As good old Sam Clemens said (approximately), there are lies, darned lies, and statistics. Even when the statistics are true, always cross-check them for perspective. Even the best news and the worst news can be overstated, and viewing the same data from multiple angles helps ensure we understand it properly.

 

[Edited 9/30 to add: At 22.6%, Black Monday in 1987 was actually the worst Dow Day ever if you don’t count “reopening days” after unusual market closures when the markets catch up with events that happened while they were closed. So when was the all-time worst Dow Day ever? Perhaps surprisingly, the answer is not in 1929, though several of the all-time top 10 were in that year. Rather, it was December 12, 1914, when the Dow dropped 24.4% after the markets reopened after being closed entirely for over four months due to the outbreak of World War I. (The markets closed on July 30, two days after Austria-Hungary declared war and a day before Germany did.) That helps put in perspective just how bad October 1987 was, and of course that today’s crisis is also pretty bad even if it hasn’t beaten those prior records, yet.]

Effective Concurrency: Lock-Free Code — A False Sense of Security

DDJ posted the next Effective Concurrency column a couple of weeks earlier than usual. You can find it here: “Lock-Free Code: A False Sense of Security” is now live on DDJ’s site and also appears in the print magazine.
 
This is a special column in a way, because I rarely critique someone else’s published code. However, mere weeks ago DDJ itself published an article with fundamentally broken code that tried to show how to write a simplified lock-free queue. I corresponded with the author, Petru Marginean, and his main reviewer, Andrei Alexandrescu, to discuss the problems, and they have patched the code somewhat and added a disclaimer to the article. But the issues need to be addressed, and so Petru kindly let me vivisect his code in public in this column (and the next, not yet available, which will show how to do it right).
 
From the article:

[Lock-free code is] hard even for experts. It’s easy to write lock-free code that appears to work, but it’s very difficult to write lock-free code that is correct and performs well. Even good magazines and refereed journals have published a substantial amount of lock-free code that was actually broken in subtle ways and needed correction.

To illustrate, let’s dissect some peer-reviewed lock-free code that was published here in DDJ just two months ago …

I hope you enjoy it. (Note: Yes, the title is a riff on Tom Cargill’s classic article “Exception Handling: A False Sense of Security”, one of Scott Meyers’ five “Most Important C++ Non-Book Publications…Ever”.)
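
For a feel of how subtle even the “simplified” case is, here is a minimal sketch of my own (not Petru’s code, and not the corrected version the follow-up column will present): a bounded single-producer/single-consumer queue written against C++11-style std::atomic, the most restricted flavor of lock-free queue there is. Even this deliberately narrow design depends on correctly paired acquire/release operations; the class and member names are just illustrative.

#include <atomic>
#include <cstddef>

// Bounded SPSC queue: exactly one producer thread calls push(), and exactly
// one consumer thread calls pop(). Holds at most Capacity-1 elements.
template<typename T, std::size_t Capacity>
class SpscQueue {
   T buf_[Capacity];                      // requires default-constructible T
   std::atomic<std::size_t> head_{0};     // next slot to read  (written by consumer)
   std::atomic<std::size_t> tail_{0};     // next slot to write (written by producer)

public:
   bool push( const T& v ) {              // producer thread only
      std::size_t t    = tail_.load( std::memory_order_relaxed );
      std::size_t next = (t + 1) % Capacity;
      if( next == head_.load( std::memory_order_acquire ) )
         return false;                    // full
      buf_[t] = v;
      tail_.store( next, std::memory_order_release );    // publish the element
      return true;
   }

   bool pop( T& out ) {                   // consumer thread only
      std::size_t h = head_.load( std::memory_order_relaxed );
      if( h == tail_.load( std::memory_order_acquire ) )
         return false;                    // empty
      out = buf_[h];
      head_.store( (h + 1) % Capacity, std::memory_order_release );  // free the slot
      return true;
   }
};

Weaken any one of those memory orderings, or add a second producer or consumer, and the queue silently stops being correct, which is exactly the column’s point.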
 
Finally, here are links to previous Effective Concurrency columns (based on the magazine print issue dates):
August 2007 The Pillars of Concurrency
September 2007 How Much Scalability Do You Have or Need?
October 2007 Use Critical Sections (Preferably Locks) to Eliminate Races
November 2007 Apply Critical Sections Consistently
December 2007 Avoid Calling Unknown Code While Inside a Critical Section
January 2008 Use Lock Hierarchies to Avoid Deadlock
February 2008 Break Amdahl’s Law!
March 2008 Going Superlinear
April 2008 Super Linearity and the Bigger Machine
May 2008 Interrupt Politely
June 2008 Maximize Locality, Minimize Contention
July 2008 Choose Concurrency-Friendly Data Structures
August 2008 The Many Faces of Deadlock
September 2008 Lock-Free Code: A False Sense of Security

Constructor Exceptions in C++, C#, and Java

I just received the following question, whose answer is the same in C++, C#, and Java.

Question: In the following code, why isn’t the destructor/disposer ever called to clean up the Widget when the constructor emits an exception? You can entertain this question in your mainstream language of choice:

// C++ (an edited version of the original code
// that the reader emailed me)
class Gadget {
   Widget* w;

public:
   Gadget() {
     w = new Widget();
     throw std::exception();   // throw by value, not “throw new”
     // … or some API call might throw
   }

   ~Gadget() {
      if( w != nullptr ) delete w;
   }
};

// C# (equivalent)
class Gadget : IDisposable {
   private Widget w;

   public Gadget() {
     w = new Widget();
     throw new Exception();
     // … or some API call might throw
   }

   public void Dispose() {
     // … eliding other mechanics, eventually calls:
     if( w != null ) w.Dispose();
     // or just “w = null;” if Widget isn’t IDisposable
   }
};

// Java (equivalent)
class Gadget {
   private Widget w;

   public Gadget() {
     w = new Widget();
     throw new Exception();
     // … or some API call might throw
   }

   public void dispose() {
     if( w != null ) w.dispose();
     // or just “w = null;” if Widget isn’t disposable

   }
};

Interestingly, we can answer this even without knowing the calling code… but there is typical calling code that is similar in all three languages that reinforces the answer.

Construction and Lifetime

The fundamental principles behind the answer are the same in C++, C#, and Java:

  1. A constructor conceptually turns a suitably sized chunk of raw memory into an object that obeys its invariants. An object’s lifetime doesn’t begin until its constructor completes successfully. If a constructor ends by throwing an exception, that means it never finished creating the object and setting up its invariants — and at the point the exceptional constructor exits, the object not only doesn’t exist, but never existed.
  2. A destructor/disposer conceptually turns an object back into raw memory. Therefore, just like all other nonprivate methods, destructors/disposers assume as a precondition that “this” object is actually a valid object and that its invariants hold. Hence, destructors/disposers only run on successfully constructed objects.

I’ve covered some of these concepts and consequences before in GotW #66, “Constructor Failures,” which appeared in updated and expanded form as Items 17 and 18 of More Exceptional C++.

In particular, if Gadget’s constructor throws, it means that the Gadget object wasn’t created and never existed. So there’s nothing to destroy or dispose: The destructor/disposer not only isn’t needed to run, but it can’t run because it doesn’t have a valid object to run against.

Incidentally, it also means that the Gadget constructor isn’t exception-safe, because it and only it can clean up resources it might leak. In the C++ version, the usual simple way to write the code correctly is to change the w member’s type from Widget* to shared_ptr<Widget> or an equivalent smart pointer type that owns the object. But let’s leave that aside for now to explore the more general issue.
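
Before setting it aside entirely, here is a minimal sketch of my own (not part of the reader’s original code) of what that exception-safe C++ fix could look like, with the w member changed to a shared_ptr<Widget> as suggested above:

// C++ (exception-safe variant): the smart pointer member owns the Widget
#include <memory>
#include <stdexcept>

class Widget { /* … */ };

class Gadget {
   std::shared_ptr<Widget> w;            // was: Widget* w;

public:
   Gadget()
     : w( new Widget() )                 // w is a fully constructed member here
   {
     throw std::runtime_error("oops");   // … or some API call might throw;
                                         // w is still destroyed, so the Widget
                                         // is freed and nothing leaks
   }

   // No user-written destructor is needed just to delete w.
};

The throw is kept only to mirror the original example; the point is that the fully constructed shared_ptr member is cleaned up even though ~Gadget itself never runs.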

A Look At the Calling Code

Next, let’s see how these semantics are actually enforced, whether by language rules or by convention, on the calling side in each language. Here are the major idiomatic ways in each language to use a Gadget object in an exception-safe way:

// C++ caller

{
  Gadget myGadget;
  // do something with myGadget
}

// C# caller

using( Gadget myGadget = new Gadget() )
{
  // do something with myGadget
}

// Java caller

Gadget myGadget = new Gadget();

try {
  // do something with myGadget
}
finally {
  myGadget.dispose();
}

Consider the two cases where an exception can occur:

  • What if an exception is thrown while using myGadget — that is, during “do something with myGadget”? In all cases, we know that myGadget’s destructor/dispose method is guaranteed to be called.
  • But what if an exception is thrown while constructing myGadget? Now in all cases the answer is that the destructor/dispose method is guaranteed not to be called.

Put another way, we can say for all cases: The destructor/dispose is guaranteed to be run if and only if the constructor completed successfully.
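
Here is a small self-contained C++ demo of my own (not from the original question) that makes that rule visible: the destructor runs exactly when the constructor completed, and never otherwise.

#include <iostream>
#include <stdexcept>

struct Gadget {
   explicit Gadget( bool fail ) {
      if( fail ) throw std::runtime_error("construction failed");
      std::cout << "Gadget constructed\n";
   }
   ~Gadget() { std::cout << "Gadget destroyed\n"; }
};

int main() {
   try {
      Gadget ok(false);            // constructor completes…
   }                               // …so "Gadget destroyed" prints here
   catch( ... ) { }

   try {
      Gadget broken(true);         // constructor throws…
   }                               // …so the destructor never runs
   catch( const std::exception& e ) {
      std::cout << "caught: " << e.what() << "\n";
   }
}

// Output:
//   Gadget constructed
//   Gadget destroyed
//   caught: construction failed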

Another Look At the Destructor/Dispose Code

Finally, let’s return to each key line of code one more time:

// C++
      if( w != nullptr ) delete w;
// C#
     if( w != null ) w.Dispose();
// Java
     if( w != null ) w.dispose();

The motivation for the nullness tests in the original example was to clean up partly-constructed objects. That motivation is suspect in principle — it means the constructors aren’t exception-safe because only they can clean up after themselves — and as we’ve seen it’s flawed in practice because the destructors/disposers won’t ever run on the code paths that the original motivation cared about. So we don’t need the nullness tests for that reason, although you might still have nullness tests in destructors/disposers to handle ‘optional’ parts of an object where a valid object might hold a null pointer or reference member during its lifetime.

In this particular example, we can observe that the nullness tests are actually unnecessary, because w will always be non-null if the object was constructed successfully. There is no (legitimate) way to call the destructor/disposer on a partly constructed object in which w might still be null. (Furthermore, C++ developers will know that the test is unnecessary for a second reason: delete is a no-op if the pointer passed to it is null, so there’s never a need to check for that special case.)

Conclusion

When it comes to object lifetimes, all OO languages are more alike than different. Object and resource lifetime matters, whether or not you have a managed language, garbage collection (finalizers are not destructors/disposers!), templates, generics, or any other fancy bell or whistle layered on top of the basic humble class.

The same basic issues arise everywhere… the code you write to deal with them is just spelled using different language features and/or idioms in the different languages, that’s all. We can have lively discussions about which language offers the easiest/best/greenest/lowest-carbon-footprint way to express how we want to deal with a particular aspect of object construction, lifetime, and teardown, but those concerns are general and fundamental no matter which favorite language you’re using to write your code this week.

Research Firms Are Good At Research, Not Technology Predictions

This story has been picked up semi-widely since last night. I’m sure this Steven Prentice they quote is a fine (Gartner) Fellow, but really:

The computer mouse is set to die out in the next five years and will be usurped by touch screens and facial recognition, analysts believe.

Seriously, does anyone who uses computers daily really believe this kind of prediction just because someone at Gartner says so? Dude, sanity check: 1. What functions do you use your mouse for? 2. How many of those functions can be done by pointing at your screen or smiling at the camera: a) at all; and b) with equivalent high precision and low arm fatigue? Of course the mouse, including direct equivalents like the touchpad/trackpad, will be replaced someday. But to notice that people like to turn and shake their Wii controllers and iPhones and then make the leap to conclude that this will replace mice outright in the short term seems pretty thin even for Gartner.

When you read a report from Gartner, Forrester, IDC and their brethren research firms, remember that you’re either getting real-world data (aka research) or a single analyst’s personal predictions (aka crystal-ball gazing). Research firms are good at what they’re good at, namely research:

  • They’re “decent” at compiling current industry market data. Grade: A.
  • They’re “pretty okay” when they limit themselves to simple short-term extrapolation of that data, such as two-year projections of cost changes of high-speed networking in Canada or cell phone penetration in India. Grade: A-.

But when they try bigger technology movement predictions like “X will replace Y in Z years” they average somewhere around “spotty,” and on their off days they dip down into “I think you forgot to sanity check that sound bite” territory. It’s a pity that some venture capitalists take the research analysts’ word as gospel. Reliability of technology shift predictions: D+.