We Have FDIS! (Trip Report: March 2011 C++ Standards Meeting)

News flash: This afternoon, the ISO C++ committee approved the final technical changes to the C++0x standard. The new International Standard for Programming Language C++ is expected to be published in summer 2011.

The spring 2011 ISO C++ meeting was held on March 21-25 in Madrid, Spain. As previously reported, the goal of this meeting was to finish responding to national body comments on the Final Committee Draft (FCD), and to accept the last set of technical changes and approve a Final Draft International Standard (FDIS) for the final international ballot.

We reached that goal. Indeed, thanks to everyone’s hard work not just at this meeting but also at, and between, the meetings leading up to Madrid, we were done early enough in the week that we also got to resolve a number of lower-priority issues and still finish a day early, on Friday, instead of also working all day Saturday as originally planned. (That said, it wasn’t a holiday: as usual for ISO C++ meetings, on pretty much any day you could find roughly half of the committee members working long past midnight in technical group sessions on particular issues, updating and reviewing proposed wording changes, then starting up again bright and early the next morning.)

Where are we in the process?

At around 16:00 Madrid time on Friday, the committee voted to approve the FDIS document, to many rounds of applause and thanks to our hosts, our project editor Pete Becker, our subgroup chairs Bjarne Stroustrup, Steve Adamczyk, Alisdair Meredith, Howard Hinnant, Lawrence Crowl, and Hans Boehm, and everyone else who has worked so hard over the last few years to bring us to this point.

The work isn’t quite done yet. The project editor now needs to update the working draft with the changes approved at this meeting, and a review committee of over a dozen volunteers will review it to help make sure those edits were made correctly. The result will be the final FDIS document. Once that happens, which we expect to take about three weeks, we will transmit the FDIS to ITTF in Geneva to kick off the final up/down international ballot, which should be complete this summer.

If all goes well, and we expect it will, the International Standard will be approved and published in 2011, henceforth to be known as C++ 2011.

A word about quality

Just like the first time, a decade and a half ago, producing this second C++ standard took longer than we initially expected. Partly that was because of ambitious early feature scope, but primarily it was in the name of quality.

Perhaps the most heartening thing to me is that this standard is widely considered by committee old-timers to be the highest-quality FDIS document we have shipped in the history of WG21, and we believe it to be in clearly better shape than the first standard’s FDIS, which we approved in November 1997 for ballot in early 1998. This time, virtually all features have actually been implemented in at least some shipping compilers, and design churn and overall design risk are significantly lower. This is particularly thanks to having shipped a large set of C++0x’s extensions first in the form of the (non-normative) Library Extensions TR (aka Library TR, aka TR1), which encouraged early vendor implementation of its features in a form the committee could still tweak, even with breaking changes as needed, before incorporating them in an international standard.

Of course, we know there are bugs and as usual we expect to have a tail of Defect Reports (DRs, aka bug fixes and patches) to process over the next few meetings; but the tail is smaller, and many of those most involved expressed clear confidence that it will be far less than the five-year tail we had on the first standard.

But, as Josee Lajoie said so eloquently in Morristown in November 1997, and as Bjarne Stroustrup and others echoed this afternoon: “Hey, we’re done!”

Let me once again express my personal thanks and appreciation to everyone who has contributed in person and electronically to this standard. We couldn’t have done it without you. Thank you, and enjoy the moment!

Looking forward

It’s our tradition to schedule one meeting a year outside the continental United States, and preferably outside North America, because this helps international participation by making it easier for people from all parts of the world to attend. Next year, as we’ve done before, this “un-American” meeting will be the Kona meeting, which is closer for folks in eastern Asia and Australia who may wish to attend.

Here are the planned dates and locations for the remaining 2011 and 2012 ISO C++ standards committee meetings:

  • August 15-19, 2011: Bloomington, IN, USA
  • March, 2012: Kona, HI, USA
  • September, 2012: Portland, OR, USA

Herb

Book on PPL is now available

For those of you who may be interested in concurrency and parallelism using Microsoft tools, there’s a new book now available on the Visual C++ 2010 Parallel Patterns Library (PPL). I hope you enjoy it.

Normally I don’t write about other people’s platform-specific books, but I happened to be involved in the design of PPL, thought the book was nicely done, and contributed a Foreword. Here’s what I wrote to introduce the book:

This timely book comes as we navigate a major turning point in our industry: parallel hardware + mobile devices = the pocket supercomputer as the mainstream platform for the next 20 years.

Parallel applications are increasingly needed to exploit all kinds of target hardware. As I write this, getting full computational performance out of most machines—nearly all desktops and laptops, most game consoles, and the newest smartphones—already means harnessing local parallel hardware, mainly in the form of multicore CPU processing; this is the commoditization of the supercomputer. Increasingly in the coming years, getting that full performance will also mean using gradually ever-more-heterogeneous processing, from local general-purpose computation on graphics processing units (GPGPU) to harnessing “often-on” remote parallel computing power in the form of elastic compute clouds; this is the generalization of the heterogeneous cluster in all its NUMA glory, with instantiations ranging from on-die to on-machine to on-cloud, with early examples of each kind already available in the wild.

Starting now and for the foreseeable future, for compute-bound applications, “fast” will be synonymous not just with “parallel,” but with “scalably parallel.” Only scalably parallel applications that can be shipped with lots of latent concurrency beyond what can be exploited in this year’s mainstream machines will be able to enjoy the new Free Lunch of getting substantially faster when today’s binaries can be installed and blossom on tomorrow’s hardware that will have more parallelism.

Visual C++ 2010 with its Parallel Patterns Library (PPL), described in this book, helps enable applications to take the first steps down this new path as it continues to unfold. During the design of PPL, many people did a lot of heavy lifting. For my part, I was glad to be able to contribute the heavy emphasis on lambda functions as the key central language extension that enabled the rest of PPL to be built as Standard Template Library (STL)-like algorithms implemented as a normal library. We could instead have built a half-dozen new kinds of special-purpose parallel loops into the language itself (and almost did), but that would have been terribly invasive and non-general. Adding a single general-purpose language feature like lambdas that can be used everywhere, including with PPL but not limited to only that, is vastly superior to baking special cases into the language.

The good news is that, in large parts of the world, we have as an industry already achieved pervasive computing: the vision of putting a computer on every desk, in every living room, and in everyone’s pocket. But now we are in the process of delivering pervasive and even elastic supercomputing: putting a supercomputer on every desk, in every living room, and in everyone’s pocket, with both local and non-local resources. In 1984, when I was just finishing high school, the world’s fastest computer was a Cray X-MP with four processors, 128MB of RAM, and peak performance of 942MFLOPS—or, put another way, a fraction of the parallelism, memory, and computational power of a 2005 vintage Xbox, never mind modern “phones” and Kinect. We’ve come a long way, and the pace of change is not only still strong, but still accelerating.

The industry turn to parallelism that has begun with multicore CPUs (for the reasons I outlined a few years ago in my essay “The Free Lunch Is Over”) will continue to be accelerated by GPGPU computing, elastic cloud computing, and other new and fundamentally parallel trends that deliver vast amounts of new computational power in forms that will become increasingly available to us through our mainstream programming languages. At Microsoft, we’re very happy to be able to be part of delivering this and future generations of tools for mainstream parallel computing across the industry. With PPL in particular, I’m very pleased to see how well the final product has turned out and look forward to seeing its capabilities continue to grow as we re-enable the new Free Lunch applications—scalable parallel applications ready for our next 20 years.

Herb Sutter
Principal Architect, Microsoft
Bellevue, WA, USA

February 2011