Where can you get the ISO C++ standard, and what does “open standard” mean?

In my role as convener of the ISO C++ committee, I get to field a number of questions about the committee and its process. It occurred to me that some of them might be of more general interest, so I’ll occasionally publish an edited version of my reply here in case other people have similar questions. Note that the quoted questions may be paraphrased.

Today’s question comes from someone who recently asked why, if ISO C++ is an open standard, ISO charges for it and we can’t just download it for free.

The short answer is that people sometimes confuse “open” with “free.” ISO standards aren’t “open” like the O in FOSS, they’re “open” like “not developed behind closed doors.” Anyone who wants to pay for membership in their national body (if their country participates in ISO and in the specific project in question) is able to come join the fun. In free-as-in-beer terms, this means that experts are welcome to come to the ISO brewery at their own expense and volunteer their time to help brew the beer, and then when the beer is ready the customers still pay ISO to drink it (the helpers don’t get a cut of that, only a free bottle for their personal use and the satisfaction of having brewed a mighty fine keg).

Longer answer follows:

ISO C++ claims to be an OPEN STANDARD. Where can I download the OPEN STANDARD for ISO C++?

All published ISO standards are available for sale from the ISO store, via http://iso.org. You can purchase a copy of the latest currently published C++ standard, 14882:2003, here for CHF 380:

Also, your national standards body may sell a copy. For example, the ANSI store sells a PDF version here for US$ 30:

However, before you buy one of those, note that we’ve been actively working on a new revision to that standard, and hope to be done in the next year or so. You can get a free copy of the latest (but incomplete) draft of the C++ standard here:

Note that this is a working draft as of a couple of weeks ago; it is not the standard currently in force, and it is not exactly the standard that will be published in the next year or two. It is, however, a draft of the latter that’s in pretty good shape, though it will still get some editorial corrections and technical tweaks.

By the way, what does it mean, that the STANDARD of a widely used language is OPEN? Especially if I have to pay for it?

All ISO standards are “open standards” in that they’re developed in an open, inclusive process. All member nations of ISO are eligible to participate, send experts, contribute material, vote on ballots, and so forth. Additionally, some working groups, including C and C++, make all of their papers and all working drafts freely available on the web, as with the link above; the only thing a working group is not allowed to make freely available, except with special permission from ISO, is the text of the final standard it produces, because ISO reserves the right to charge for that.

Could you explain to me, what should it mean if the STANDARD of a widely used language was CLOSED?

Generally, that means it was developed privately by some closed industry group or consortium that not everyone is allowed to join and participate in. Some standards are developed behind closed doors controlled by some company or companies. ISO standards are not like that.

Best wishes,

Herb

Effective Concurrency Europe 2010

Last May, I gave a public Effective Concurrency course in Stockholm. It was well-attended, and a number of people have asked if it will be offered again. The answer is yes.

I’m happy to report that Effective Concurrency Europe 2010 will be held on May 5-7, 2010, in Stockholm, Sweden. There’s an early-bird rate available for those who register before March 15.

I’ll cover the following topics:

  • Fundamentals
      – Define basic concurrency goals and requirements
      – Understand applications’ scalability needs
      – Key concurrency patterns
  • Isolation — Keep work separate
      – Run tasks in isolation and communicate via async messages
      – Integrate multiple messaging systems, including GUIs and sockets
      – Build responsive applications using background workers
      – Threads vs. thread pools
  • Scalability — Re-enable the Free Lunch
      – When and how to use more cores
      – Exploiting parallelism in algorithms
      – Exploiting parallelism in data structures
      – Breaking the scalability barrier
  • Consistency — Don’t Corrupt Shared State
      – The many pitfalls of locks: deadlock, convoys, etc.
      – Locking best practices
      – Reducing the need for locking shared data
      – Safe lock-free coding patterns
      – Avoiding the pitfalls of general lock-free coding
      – Races and race-related effects
  • High Performance Concurrency
      – Machine architecture and concurrency
      – Costs of fundamental operations, including locks, context switches, and system calls
      – Memory and cache effects
      – Data structures that support and undermine concurrency
      – Enabling linear and superlinear scaling
  • Migrating Existing Code Bases to Use Concurrency
  • Near-Future Tools and Features

I hope to see some of you there!

Igor Ostrovsky and the Seven Cache Effects

My colleague Igor Ostrovsky has written a useful summary of seven cache memory effects that every advanced developer should know about because of their performance impact, particularly as we strive to keep invisible bottlenecks out of parallel code.

I’ve covered variations of Igor’s examples #1, #2, #3, and #6 in my Machine Architecture talk and several of my articles. His article provides a crisp and concise summary of these and three more kinds of cache effects along with simple and clear sample code and intriguing measurements (for example, see the detail in the graph for #5 and its analysis).

Recommended.

Effective Concurrency: Prefer Futures to Baked-In “Async APIs”

This month’s Effective Concurrency column, Prefer Futures to Baked-In “Async APIs”, is now live on DDJ’s website.

From the article:

When designing concurrent APIs, separate “what” from “how”

Let’s say you have an existing synchronous API function [called DoSomething]… Because DoSomething could take a long time to execute (whether it keeps a CPU core busy or not), and might be independent of other work the caller is doing, naturally the caller might want to execute DoSomething asynchronously. …

The question is, how should we enable that? There is a simple and correct answer, but because many interfaces have opted for a more complex answer let’s consider that one first.

I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns:

The Pillars of Concurrency (Aug 2007)

How Much Scalability Do You Have or Need? (Sep 2007)

Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)

Apply Critical Sections Consistently (Nov 2007)

Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)

Use Lock Hierarchies to Avoid Deadlock (Jan 2008)

Break Amdahl’s Law! (Feb 2008)

Going Superlinear (Mar 2008)

Super Linearity and the Bigger Machine (Apr 2008)

Interrupt Politely (May 2008)

Maximize Locality, Minimize Contention (Jun 2008)

Choose Concurrency-Friendly Data Structures (Jul 2008)

The Many Faces of Deadlock (Aug 2008)

Lock-Free Code: A False Sense of Security (Sep 2008)

Writing Lock-Free Code: A Corrected Queue (Oct 2008)

Writing a Generalized Concurrent Queue (Nov 2008)

Understanding Parallel Performance (Dec 2008)

Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)

volatile vs. volatile (Feb 2009)

Sharing Is the Root of All Contention (Mar 2009)

Use Threads Correctly = Isolation + Asynchronous Messages (Apr 2009)

Use Thread Pools Correctly: Keep Tasks Short and Nonblocking (Apr 2009)

Eliminate False Sharing (May 2009)

Break Up and Interleave Work to Keep Threads Responsive (Jun 2009)

The Power of “In Progress” (Jul 2009)

Design for Manycore Systems (Aug 2009)

Avoid Exposing Concurrency – Hide It Inside Synchronous Methods (Oct 2009)

Prefer structured lifetimes – local, nested, bounded, deterministic (Nov 2009)

Prefer Futures to Baked-In “Async APIs” (Jan 2010)

C++ and Beyond: Summer 2010, Vote the Date

I always enjoy teaching together with Scott Meyers and Andrei Alexandrescu, not only because it means I get to work with good friends, but also because I get to listen to them speak. Scott and Andrei always have interesting and useful things to say and say them well. We occasionally speak at the same big conferences, but at those we’re often scheduled at the same time and so we don’t get to hear each other’s sessions (and attendees likewise have to choose one competing session or the other for that time slot), and presenting in large rooms to hundreds of people makes it hard to get much quality face time with the individual audience members.

So I’m really looking forward to spending three days together with Scott and Andrei and a limited number of attendees at a new event called C++ and Beyond:

What do C++ programmers think about these days? Perhaps the new features from C++0x that are becoming commonly available and that introduce fundamental changes in how C++ software is designed. Perhaps the increasing importance of developing effective concurrent systems. Perhaps the continuing pressure to create high-performance software. Possibly the impact of new systems programming languages such as D. Most likely, all of the above, and more.

C++ legends Scott Meyers, Herb Sutter, and Andrei Alexandrescu think about these things, too — all the time. Scott and Herb are neck-deep in C++0x, while Andrei is literally writing the book on D (The D Programming Language). Herb and Andrei put the pedal to the metal on applied concurrency and parallelism; Herb is writing the book on that topic (Effective Concurrency). All three focus on the development of high-performance systems, a topic Scott’s writing a book about (Fastware!).

This summer, Scott, Herb, and Andrei will host an intensive three-day technical event focusing on “C++ and Beyond” — an examination of issues related to C++ and its application in high-performance (typically highly concurrent) systems, as well as related technologies of likely interest to C++ programmers.

We know it’ll be in the Seattle area. We know it’ll be this summer. What we don’t know is which of the two candidate dates (in June or August) works best for you, so we thought we’d let you decide: If you’re potentially interested in attending, please vote for your preferred date. At that page you can also let us know what kinds of topics you’d like to see covered.

Once we finalize the details we’ll post them and open registration. You can subscribe to follow announcements via this feed.

We hope to see you in the beautiful Pacific Northwest this summer.

Stroustrup on Teaching Software Developers

Recommended reading (it’s short), from the January 2010 issue of CACM:

What Should We Teach New Software Developers? Why?
by Bjarne Stroustrup

It’s a wonderfully accurate and concise summary of the disconnect between the ivory tower and the trenches – i.e., (some) computer science academics and (some) software development industry managers, with commentary on other topics like why or why not to regulate the software development industry.

My favorite part is this from the conclusion (emphasis added):

We must do better. Until we do, our infrastructure will continue to creak, bloat, and soak up resources. Eventually, some part will break in some unpredictable and disastrous way (think of Internet routing, online banking, electronic voting, and control of the electric power grid).

Bjarne isn’t prone to disaster warnings (indeed, this is the first time I recall seeing a comment like this from him, even over a beer or three), but he’s right. This hits directly on an issue I’ve also been giving thought to in recent years: As an industry and a society, we routinely underestimate the degree to which we’ve gradually allowed our automated civilization to become reliant on computers and software, and vulnerable as a result. We’ve been satisfied with making any given system “reliable enough” for the intended application (e.g., having a much higher bar for life-critical software than for a word processor), and so far we’ve been able to get away with that without the level of broad regulation for software development that is routinely required for other disciplines like engineering that are involved in providing essential products and services. Of course, as Bjarne points out, one reason for the lack of regulation of the software development industry is that we don’t know (and/or can’t agree on) exactly what to require and how to measure it; we’re just not as mature a field yet as civil or mechanical engineering.

Recognizing the potential scope of a catastrophic and systemic software failure in the field – one that disables a vital piece of infrastructure (say, electric power or food distribution, country-wide for a month) and that can’t be patched with a remote update – adds impetus to understanding and solving the kinds of issues Bjarne writes about.

Guest Blog: Words Matter

This morning my colleague Rob Hanz wrote an interesting email that went viral in my corner of Microsoft. He graciously allowed me to share it with you here. I hope you enjoy it too.

Blink and subconscious messaging
Robert Hanz

I was reading Blink last night, and one of the things it mentioned is how subconscious signals can significantly impact conscious activity. For instance, one experiment took jumbled, semi-random words and had the subjects arrange them into sentences. After they finished these, they would need to go down to the hall and speak to a person at an office to sign out of the test.

But the real test wasn’t the sentences themselves – mixed in with the words to form sentences was one of two sets of words. One set had a lot of words regarding patience and cooperation, while the other had words regarding belligerence and impatience. The real test was to see how the subject behaved when the person they needed to talk to was engaged in a conversation and was unable to help them.

The group that had the belligerent words waited, on average, about 5 minutes before interrupting the conversation. The group with the patient wordset waited indefinitely. The test was designed to end after I believe 10 minutes of waiting – not a single “patient” subject ever interrupted the conversation.

Reading this, one of the things that came to mind was some of the different messaging used by waterfall projects vs. agile projects, and how often we hear them.

Waterfall:
“Bug” (used for just about anything that needs to be done by a developer)
“Priority one”
“Due by”
“Stabilization”
(this in addition to the frequently long list of “bugs” that confronts developers every day)

Agile:
“User story”
“Customer value”
“Velocity”
(in addition, the list of work to be done is usually scoped to be possible within a single week)

When I thought about the differences in words, I was really struck by how different the messaging was. In the waterfall case, the message was overwhelmingly negative – it focused on failures, urgency, and almost a sense of distrust. The language seemed to be geared around the ways individuals messed up, and how everything is an emergency that must be dealt with immediately. And, if you think about it, in a waterfall culture there is typically no frequent reward or messaging of success – the best you can typically hope for (for much of the cycle) is to avoid failure. And the idea that you’re actually producing value is very much removed from the language.

On the other hand, the agile language itself focuses on results, not failures. Stories are typically “done” or “not done,” and while bugs are certainly tracked, at a high level that’s usually just a statement that the story is “not done.” Combine this with sprint reviews (where the team shows what they’ve accomplished), and the overall message becomes very much focused on the successes of the team, rather than the failures. Progress is measured by value added, not by failures avoided. Even something as low-level as TDD consistently gives its practitioners a frequent message of success with the “green bar.”

While I certainly believe that agile development has many advantages in terms of reducing iteration time and tightening feedback loops, among other things, is it possible that something as simple as the shift in language is also a significant part of the effectiveness? Could priming individuals with messages of success and value, rather than messages of failure, boost morale and productivity?

Trip Report: October 2009 ISO C++ Standards Meeting

The ISO C++ committee met in Santa Cruz, CA, USA on October 19-24. You can find the minutes here, which include the votes at the whole-group sessions but not the details of the breakout technical sessions where we spend most of the week.

The good news is that there’s little new technical news. We did a lot of work during the week, but it was mostly working on refining the standard, deciding integration questions of how two language features should work together in cases not clearly described, fixing bugs, and answering national body comments on our first public draft last fall (those are now nearly all answered). We expect to produce another public draft at our next meeting in March.

We did vote in one small feature that I and Lawrence Crowl in particular had been working on: a simple async() facility to launch asynchronous work easily without messing with packaged_tasks and raw threads. Here’s a sample use, also demonstrating a simple use of the futures library and a lambda function for kicks:

  future<int> f = std::async( []{ return OtherWork(); } );  // OtherWork here returns an int

  // ... do our own work concurrently with OtherWork ...

  OkayNowWeNeedTheResult( f.get() );  // blocks if necessary until f is ready

If you’ve been following the futures library, you’ll notice a name change above: We also renamed unique_future<T> to just plain future<T> as part of recasting the futures wording to make it clearer and more consistent. That’s an example of the kind of cleanup work being done.

Near the end of the meeting, we also discussed deprecating export (as I reported earlier) and deprecating exception specifications other than the empty throw() (“throws nothing”) form. There seemed to be significant support for deprecating both, and so we’ll probably see a concrete proposal at our next meeting.

In sad news, the convener (chair) of the committee for the past year, P.J. Plauger, stepped down at the end of the meeting. After I had been the convener for two three-year terms from 2002 to 2008, I decided it was time for someone else to have a go and so Plauger replaced me a year ago. He has done a really great job over the past year and his contributions in that role will be missed, but we won’t lose his services entirely as he remains an active participant in the committee. I will probably volunteer again to replace him.

That’s pretty much it. The next meeting of the ISO C++ standards committee is in March:

(Edited to fix “2009” in the title and add a link to the Pittsburgh meeting invitation.)

Effective Concurrency: Prefer structured lifetimes – local, nested, bounded, deterministic.

This month’s Effective Concurrency column, Prefer structured lifetimes – local, nested, bounded, deterministic, is now live on DDJ’s website.

From the article:

Where possible, prefer structured lifetimes: ones that are local, nested, bounded, and deterministic. This is true no matter what kind of lifetime we’re considering, including object lifetimes, thread or task lifetimes, lock lifetimes, or any other kind. …

I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns:

The Pillars of Concurrency (Aug 2007)

How Much Scalability Do You Have or Need? (Sep 2007)

Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)

Apply Critical Sections Consistently (Nov 2007)

Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)

Use Lock Hierarchies to Avoid Deadlock (Jan 2008)

Break Amdahl’s Law! (Feb 2008)

Going Superlinear (Mar 2008)

Super Linearity and the Bigger Machine (Apr 2008)

Interrupt Politely (May 2008)

Maximize Locality, Minimize Contention (Jun 2008)

Choose Concurrency-Friendly Data Structures (Jul 2008)

The Many Faces of Deadlock (Aug 2008)

Lock-Free Code: A False Sense of Security (Sep 2008)

Writing Lock-Free Code: A Corrected Queue (Oct 2008)

Writing a Generalized Concurrent Queue (Nov 2008)

Understanding Parallel Performance (Dec 2008)

Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)

volatile vs. volatile (Feb 2009)

Sharing Is the Root of All Contention (Mar 2009)

Use Threads Correctly = Isolation + Asynchronous Messages (Apr 2009)

Use Thread Pools Correctly: Keep Tasks Short and Nonblocking (Apr 2009)

Eliminate False Sharing (May 2009)

Break Up and Interleave Work to Keep Threads Responsive (Jun 2009)

The Power of “In Progress” (Jul 2009)

Design for Manycore Systems (Aug 2009)

Avoid Exposing Concurrency – Hide It Inside Synchronous Methods (Oct 2009)

Prefer structured lifetimes – local, nested, bounded, deterministic (Nov 2009)