Concurrency Town Hall: On the web, this Monday, December 3

Earlier this fall, James Reinders and I each gave a concurrency webcast as part of Intel’s fall webcast series. On Monday, James and I will follow that up with a virtual Town Hall panel discussion on concurrency, with Dr. Dobb’s editor Jon Erickson as our moderator. Should be cool. And you are invited to attend.

Here are the coordinates:

Date: Monday, December 3, 2007

Time: 9:00-10:00am U.S. Pacific Time


Web simulcast (doesn’t require Second Life):

I expect that most people will be watching the event via the live simulcast on Second Life Cable Network. Use that if you’re not a Second Life member, or if you are a member but can’t get in there because the SL servers are expected to reach their maximum capacity for a single event.

There’s also a networking breakfast before the town hall portion. Here’s the full description, as it appears on Intel’s site:


A virtual Town Hall held in Second Life by Intel Software Network and Dr. Dobb’s Journal

December 3, 2007 8:00 AM – 11:00 AM Pacific Time (11:00 AM – 2:00 PM ET) on Intel and Dr Dobb’s Islands.

Meet Intel and Industry experts, learn about parallel programming and concurrency trends, let your voice be heard.

This half-day event will begin with a networking breakfast on Dr. Dobbs Island where you will have the ability to meet some of Intel’s best engineers. Tim Mattson (Intel Principal Engineer, designer of the 80-core test chip parallel application, and one of the creators of OpenMP*) will give a brief introduction on our transition to a many-core world.

Following breakfast, attendees will adjourn to Intel Island for the virtual town hall. This event will be moderated by Jonathan Erickson (Editorial Director, Dr. Dobb’s) and will feature Herb Sutter (Architect, Microsoft, and chair of the ISO C++ Standards committee) and James Reinders (Chief Evangelist and Threading Guru for Intel Software Products), who will debate and discuss issues involved in realizing the promise of parallelization. As in any Town Hall, your input will be vital.

After the Town Hall, Peretz Stine of Intel Island and Rissa Maidstone of Dr. Dobbs will lead tours of their respective islands.

Schedule for December 3, 2007:
• 8:00-9:00 AM SLT – Networking Breakfast with Tim Mattson
• 9:00-10:00 AM SLT – Town Hall Meeting with Jon Erickson, James Reinders and Herb Sutter
• 10:00-11:00 AM SLT – Tour of Intel and Dr. Dobbs Islands

You must register to reserve a place at this event:
This event will be simulcast live on Second Life Cable Network:

Visual C++ Announcements in Barcelona: TR1 and MFC

Note: Many of you who read this blog aren’t Windows developers. I think you might still be interested in skimming this announcement, because I think it matters to the cross-platform and global C++ community when major vendors are clearly still investing heavily in C++ and C++-based technologies, despite what some naysayers proclaim.

How much does Microsoft care about C++ and non-.NET code? In August, I already blogged about

Now I’m happy to elaborate beyond just hinting, in the wake of announcements made yesterday by our Developer Division VP Soma and by our Visual C++ team. Here’s the scoop.

This month: Visual Studio 2008 ("Orcas") ships

First the comparatively minor news: We have a ship date. By the end of this month, we will ship Visual Studio 2008 ("Orcas"), including in particular Visual C++ 2008, which includes a number of cool features I’ve blogged about before, such as /MP for parallel builds on multicore machines.

But VC++ isn’t stopping there. Instead of waiting another year or two between releases, we’re immediately doing another one, gratis to VC++ 2008 customers:

First half of 2008: VS2008 Update ships

In the first half of 2008, VC++ is additionally going to ship an "expansion pack"-like update that will be available to all owners of Visual Studio 2008 Standard and above. This includes two major pieces that I know will be welcome in two major developer communities.


"TR1" is the first set of library extensions published by the C++ committee, nearly all of which have also been adopted into the next C++ standard, C++0x. Now that we know what parts seem to be sealed for inclusion in the next C++ standard and stable, we can start shipping them.

The VC++ 2008 update will include a complete implementation of all parts of TR1 that were voted into C++0x, except only for the section on functions added for compatibility with the C99 standard. The update will include features like:

  • smart pointers (e.g., shared_ptr<T>, weak_ptr<T>)
  • regular expression parsing
  • new containers (e.g., tuple, the fixed-size array<T,N>, hash-based containers like unordered_set<T>)
  • sophisticated random number generators
  • polymorphic function wrappers (e.g., function<bool(int)> and bind)
  • type traits

and a bunch more of all that good stuff lurking inside ISO C++ Library TR1. C++0x may tweak some of the interfaces, and we’ll track that as they do. In the meantime, here they are right in VC++.

A huge update to MFC

I hinted about a "huge update to MFC" back in August. Here’s the scoop…

Using this update to MFC, developers will be able to create applications with the “look and feel” of Microsoft’s own UIs:

  • Vista theme support, allowing your application to dynamically switch between themes.
  • The Office 2007 UI, including the Ribbon bar look in all its glory, with the ribbon itself, the pearl, quick access toolbar, status bar, and more.
  • The Office 2003 and XP UI, including Outlook-style shortcut bar, Office-style toolbars and menus, print preview, live font picker, color picker, etc.
  • The Internet Explorer UI, including rebars and task panes in all their glory.
  • The Visual Studio UI, with sophisticated docking functionality, auto hide windows, property grids, MDI tabs, tab groups, and more.

All in native C++ code. In MFC.

Oh, and there are also more things… for example, you can also enable your users to customize your application "on the fly" through live drag and drop of menu items and toolbar buttons; use the new Windows shell management classes to enumerate folders, drives and items, browse for folders, and more; and take advantage of many more new controls you can use right out of the box.

Want a quick tour? Get your quick overview with lots of screen shots in our own Pat Brenner’s VCblog entry, and also check out this handy Channel 9 video interview with Pat.

Implementation notes

Where did we get the code? This time, we decided to buy rather than build. The code for the Office 2007 look-and-feel and other items was based on code we licensed from BCGSoft rather than code we developed internally.

Some people wondered about this, but it’s just an implementation detail. In particular, here are two questions I’ve seen people ask, and my personal answers (I am not speaking for Microsoft):

Q: Does it matter that this is code originally written by another company (BCGSoft)?

A: No. This update is a Microsoft Visual Studio product, full stop. With the full level of testing and support that implies, today and tomorrow.

Like most software companies, Microsoft routinely both builds its own code and integrates licensed software. We’ve long done the latter for the C++ standard library implementation (from Dinkumware), the Office proofing tools and spelling/grammar/thesaurus/hyphenation/etc. dictionaries, and mapping data for Live, to name just a few off the top.

Q: Does it matter that this isn’t the code the Office team used to implement their own look-and-feel? Why didn’t Microsoft ship Office’s own internal code?

A: No, and because internal code usually isn’t at the level of polish required of a shipping product, respectively. I wonder if people realize this is the norm, not the exception, so let me explain a little.

Think of Office as the biggest Windows ISV (independent software vendor) that operates mostly as a separate company, even though it happens to be a sister division. Office has always driven the development of their own UI advances separately from Windows and Visual Studio. Windows and Visual Studio, in turn, have always tried to follow quickly in adding support for Office’s advances so that the Windows shell and other third-party applications could present the same look-and-feel wherever they wanted to, and those implementations have usually been:

  1. Later, because Office drove the innovation and the platform and tools followed.
  2. Different from Office’s own internal home-brewed versions, because there’s a huge difference between "solid for internal use" and "productized," with all the additional documentation, testing, and external usability that productizing requires.

The fact that we have an internal and an external "real" implementation for the same thing doesn’t matter. We’ve always done it that way, and the "real" implementation is equal and fully supported. (I think it’s funny that some worry that they’re not getting Office’s original internal ‘real’ implementation; the real "real" implementation is the one that’s not just for internal use, almost by definition!) Once a UI innovation gets into the Windows UI and/or especially the Visual Studio toolset, it’s first-class and here to stay.

Finally, I said this is a huge update. What does "huge" mean? This update nearly doubles the size of MFC. Now, "nearly doubles the size of X" can be a bad thing. In this case, though, it’s a Good Thing… in my opinion, at least.

Beta and release availability

The update is expected to be available in beta form in January 2008, and to ship in the first half of 2008. Enjoy!

Wrapping up

Finally, let me express some well-deserved appreciation for a HUGE amount of hard work invested by the Visual C++ team, and our libraries team in particular, to make this happen and bring it to customers. Those of us who spend a good fraction of our lives on the second floor of Building 41 saw it firsthand, and know how hard everyone on the team has worked to make this happen at high quality. Thanks for that; it’s been quite a year of PM design work, coding and QA milestones, bug bars, and shiprooms.

For those who’ve asked what Visual C++ is planning beyond this month’s release, the above is an additional part of the answer. Far from the last part, however, because as Ale Contenti said in his announcement on VCblog about our plans beyond VC++ 2008:

"This is just the first step in our drive to improve the native development experience.  There’s a lot more that we’re working on, but we hope you enjoy this first milestone."

Stay tuned.

Effective Concurrency: Avoid Calling Unknown Code While Inside a Critical Section

The latest Effective Concurrency column, “Avoid Calling Unknown Code While Inside a Critical Section”, just went live on DDJ’s site, and will also appear in the print magazine. Here’s a teaser from the article’s intro:

Don’t walk blindly into a deadly embrace…

Our world is built on modular, composable software. We routinely expect that we don’t need to know about the internal implementation details of a library or plug-in to be able to use it correctly.

The problem is that locks, and most other kinds of synchronization we use today, are not modular or composable. That is, given two separately authored modules, each of which is provably correct but uses a lock or similar synchronization internally, we generally can’t combine them and know that the result is still correct and will not deadlock. There are only two ways to combine such a pair of lock-using modules safely: …

I hope you enjoy it.

2022-11 Update: The article has bitrotted at its original location, so here’s a quick rough copy-paste of the article contents.

Our world is built on modular, composable software. We routinely expect that we don’t need to know about the internal implementation details of a library or plug-in in order to be able to use it correctly.

The problem is that locks, and most other kinds of synchronization we use today, are not modular or composable. That is, given two separately authored modules each of which is provably correct but uses a lock or similar synchronization internally, we generally can’t combine them and know that the result is still correct and will not deadlock. There are only two ways to combine such a pair of lock-using modules safely:

  • Option 1 (generally impossible for code you don’t control): Each module must know about the complete internal implementation of any function it calls in the other module.
  • Option 2: Each module must be careful not to call into the other module while the calling code is inside a critical section (e.g., while holding a lock).

Let’s examine why Option 2 is generally the only viable choice, and what consequences it has for how we write concurrent code. For convenience and familiarity, I’m going to cast the problem in terms of locks, but the general case arises whenever:

  • The caller is inside one critical section.
  • The callee directly or indirectly tries to enter another critical section, or performs another blocking call.
  • Some other thread could try to enter the two critical sections in the opposite order.

Quick Recap: Deadlock

A deadlock (aka deadly embrace) can happen anytime two different threads can try to acquire the same two locks (or, more generally, acquire exclusive use of the resources protected by the same two synchronization objects) in opposite orders. Therefore anytime you write code where a thread holds one lock L1 and tries to acquire another lock L2, that code is liable to be one arm of a potential deadly embrace—unless you can eliminate the possibility that some other thread might try to acquire the locks in the other order.

We can use techniques such as lock hierarchies to guarantee that locks are taken in order. Unfortunately, those techniques do not compose either. For example, you can use a lock hierarchy and guarantee that the code you control follows the discipline, but you can’t guarantee that other people’s code will know any­thing about your lock levels, much less follow them correctly.

Example: Two Modules, One Lock Each

One fine day, you decide to write a new web browser that allows users to write plug-ins to customize the behavior or rendering of individual page elements.

Consider the following possible code, where we simply protect all the data structures representing elements on a given page using a single mutex mutPage:[3]

// Example 1: Thread 1 of a potential deadly embrace

class CoreBrowser {
  … other methods …
  private void RenderElements() {
    mutPage.lock();              // acquire exclusion on the page elements
    try {
      for( each PageElement e on the page ) {
        DoRender( e );           // do our own default processing
        plugin.OnRender( e );    // let the plug-in have a crack at it
      }
    } finally {
      mutPage.unlock();          // and then release it
    }
  }
}

Do you see the potential for deadlock? The trouble is that if, inside the call to plugin.OnRender, the plug-in acquires some internal lock of its own, then this could be one arm of a potential deadly embrace. For example, consider this plug-in implementation that just does some basic instrumentation of how many times certain actions have been performed, and protects its internal data with a single mutex mutMyData:

class MyPlugin {
  … other methods …
  public void OnRender( PageElement e ) {
    mutMyData.lock();            // acquire exclusion on some internal shared data
    try {
      renderCount[e]++;          // update #times e has been rendered
    } finally {
      mutMyData.unlock();        // and then release it
    }
  }
}

Thread 1 can therefore acquire mutPage and mutMyData in that order. Thread 1 is potential deadlock-bait, but the trouble will only actually manifest if one fine day some other Thread 2 that could run concurrently with the above performs something like the following:

// Example 2: Thread 2 of a potential deadly embrace

class MyPlugin {
  … other methods …
  public void RefreshDisplay( PageElement e ) {
    mutMyData.lock();            // acquire exclusion on some internal shared data
    try {                        // display stuff in a debug window
      for( each element e we’ve counted ) {
        listRenderedCount.Add( e.Name(), renderCount[e] );
      }
      textHiddenCount = browser.CountHiddenElements();
    } finally {
      mutMyData.unlock();        // and then release it
    }
  }
}

Notice how the plugin calls code unknown to it, namely browser.CountHiddenElements? You can probably see the trouble coming on like a steamroller:

class CoreBrowser {
  … other methods …
  public int CountHiddenElements() {
    mutPage.lock();              // acquire exclusion on the page elements
    try {
      int count = 0;
      for( each PageElement e on the page ) {
        if( e.Hidden() ) count++;
      }
      return count;
    } finally {
      mutPage.unlock();          // and then release it
    }
  }
}

Threads 1 and 2 can therefore acquire mutPage and mutMyData in the opposite order, and so this is a deadlock waiting to happen if Threads 1 and 2 can ever run concurrently. For added fun, note that each mutex is purely an internal implementation detail of its module that is never exposed in the interface; neither module knows anything about the internal lock being used within the other. (Nor, in a better programming world than the one we now inhabit, should it have to.)

Note: Both CoreBrowser and MyPlugin violate the rule. For CoreBrowser, see below for workarounds. The plug-in, for its part, can easily move the call to browser.CountHiddenElements (which is code external to the plug-in) to before or after the critical section – it does need a lock for some of the work in the try block, but it doesn’t need the lock around all of that work, and especially not around the call out to unknown code.

Example: Two Modules, but Only One Has Locks

This kind of problem can arise even if both locks are in the same module, but control flow passes through another module so that you don’t know what locks are taken.

Consider the following modification, where the browser protects each page element using a separate mutex, which can be desirable to allow different parts of the page to be rendered concurrently:

// Example 3: Thread 1 of an alternative potential deadly embrace

class CoreBrowser {
  … other methods …
  private void RenderElements() {
    for( each PageElement e on the page ) {
      e.mut.lock();              // acquire exclusion on this page element
      try {
        DoRender( e );           // do our own default processing
        plugin.OnRender( e );    // let the plug-in have a crack at it
      } finally {
        e.mut.unlock();          // and then release it
      }
    }
  }
}

And consider a plug-in that does no locking of its own at all:

class MyPlugin {
  … other methods …
  public void OnRender( PageElement e ) {
    GuiCoord cPrev = browser.GetElemPosition( e.Previous() );
    GuiCoord cNext = browser.GetElemPosition( e.Next() );
    Use( e, cPrev, cNext );      // do something with the coordinates
  }
}

But which calls back into:

class CoreBrowser {
  … other methods …
  public GuiCoord GetElemPosition( PageElement e2 ) {
    e2.mut.lock();               // acquire exclusion on this page element
    try {
      return FigureOutPositionFor( e2 );
    } finally {
      e2.mut.unlock();           // and then release it
    }
  }
}

The order of mutex acquisition is:

for each element e
  acquire e.mut
    acquire e.Previous().mut
    release e.Previous().mut
    acquire e.Next().mut
    release e.Next().mut
  release e.mut

Perhaps the most obvious issue is that any pair of locks on adjacent elements can be taken in either order by Thread 1; so this cannot possibly be part of a correct lock hierarchy discipline.

Because of the interference of the plug-in code, which does not even have any locks of its own, this code will have a latent deadlock if any other concurrently running thread (including perhaps another instance of Thread 1 itself) can take any two adjacent elements’ locks in any order. The deadlock-proneness is inherent in the design, which fails to guarantee a rigorous ordering of locks. In this respect, the original Example 1 was better, even though its locking was coarse-grained and less friendly to concurrent rendering of different page elements.

Consequences: What Is “Unknown Code”?

It’s one thing to say “avoid calling unknown code while holding a lock” or while inside a similar kind of critical section. It’s another to do it, because there are so many ways to get into “someone else’s code.” Let’s consider a few.

While inside a critical section, including while holding a lock:

  • Avoid calling library functions. A library function is the classic case of “somebody else’s code.” Unless the library function is documented to not take any locks, deadlock problems can arise.
  • Avoid calling plug-ins. Clearly a plug-in is “somebody else’s code.”
  • Avoid calling other callbacks, function pointers, functors, delegates, etc. C function pointers, C++ functors, C# delegates, and the like can also fairly obviously lead to “somebody else’s code.” Sometimes you know that a function pointer, functor, or delegate will always lead to your own code, and calling it is safe; but if you don’t know that for certain, avoid calling it from inside a critical section.
  • Avoid calling virtual methods. This may be less obvious and quite surprising, even Draconian; after all, virtual functions are common and pervasive. But every virtual function is an extensibility point that can lead to executing code that doesn’t even exist today. Unless you control all derived classes (for example, the virtual function is internal to your module and not exposed for others to derive from), or you can somehow enforce or document that overrides must not take locks, deadlock problems can arise if it is called while inside a critical section.
  • Avoid calling generic methods, or methods on a generic type. When writing a C++ template or a Java or .NET generic, we have yet another way to parameterize and intentionally let “other people’s code” be plugged into our own. Remember that anything you call on a generic type or method can be provided by any­one, and unless you know and control all the types with which your template or generic can be instantiated (for example, your template or generic is internal to your module and so cannot be externally instantiated by someone else), avoid calling something generic from within a critical section.

Some of these restrictions may be obvious to you; others may be surprising at first.

Avoidance: Non-Critical Calls

So you want to remove a call to unknown code from a critical section. But how? What can you do? Four options are: (a) move the call out of the critical section if you didn’t need the exclusion anyway; (b) make copies of data inside the critical section and later pass the copies; (c) reduce the granularity or power of the critical section being held at the point of the call; or (d) instruct the callee sternly and hope for the best. Let’s look at each option in turn.

We can apply the first approach directly to Example 2. There is no reason the plugin needs to call browser.CountHiddenElements() while holding its internal lock. That call should simply be moved to before or after the critical section.

The second approach is to avoid the need for taking the lock by avoiding the use of mutable shared data. As shown in Items todo and todo, the two general approaches are:

  • Avoid shared data by passing copies of data, which solves the correctness problem by trading off space and some performance; variants of this approach include passing a subset of the data, and passing the copies via messages to run the callee asynchronously.
  • Avoid mutable data by using immutable snapshots of the state. todo refine this

To improve Example 1, for instance, it might be appropriate to change the RenderElements method to hold the lock only long enough to take copies of the necessary shared information into a local container, then do the processing outside the lock, passing the copied elements. However, this could be inappropriate if the data is very expensive to copy, or if the callee needs to work on the real data. Alternatively, perhaps the callee doesn’t really need all the information it gets from being given direct access to the protected object, and it would be both sufficient and efficient to pass copies of just the parts of the data the callee does need.

The third option is to reduce the power or granularity of the critical section, which implicitly trades off ease of use because making your synchronization finer-tuned and/or finer-grained also makes it harder to code correctly. One example of reducing the power of the critical section is to replace a mutex with a reader-writer mutex so that multiple concurrent readers are allowed; if the only deadlocks could arise among threads that are only performing reads of the protected resources, then this can be a valid solution by enabling them to take a read-only lock instead of a read-write lock. And an example of making the critical section finer-grained is to replace a single mutex protecting a large data structure with mutexes protecting parts of the structure; if the only deadlocks could arise among threads that use different parts of the structure, then this can be a valid solution (note that Example 1 is not such a case).

The fourth option is to tell the callee not to block, which trades off enforceability. In particular, if you do enjoy the power to impose requirements on the callee (as you do with plug-ins to your software, but not with simple calls into existing third-party libraries), then you can require them not to take locks or otherwise perform blocking actions. Alas, such requirements are typically limited to documentation and are not automatically enforceable, but in practice it can often be sufficient to rely on programmer discipline to follow the rule. Tell the callee what (not) to do, and hope he follows the yellow brick road.


Be aware of the many opportunities modern languages give us to call “other people’s code,” and eliminate external opportunities for deadlock by not calling unknown code from within a critical section. If you additionally eliminate internal opportunities for deadlock by applying a lock hierarchy discipline within the code you control, your use of locks will be highly likely to be correct… and then we can move on to making it efficient, which we’ll consider in future articles.


[1] H. Sutter. “The Trouble With Locks” (C/C++ Users Journal, 23(3), March 2005). Available at

[2] Note that calling unknown code from within a critical section is a problem even in single-threaded code, because while our code is inside a critical section we typically have broken invariants. For example, an object may be partway through the process of changing its values from one valid state to another, or we may be partway through a debit-credit transaction where one account has had money removed but the other account has not yet received it. If we call into unknown code while in such a situation, there’s the (usually remote) possibility that the code we call may in turn call other code, which in turn calls other code, which in turn calls into the original module and sees the broken invariant. Techniques like layering can minimize this problem by ensuring code at lower levels can’t “call back up” into higher-level code. The problem is greatly magnified, however, in the presence of concurrency, and we need the same kinds of “layering” tools, only now applied to locks and other ways to express critical sections, to ensure similar problems don’t occur. Unfortunately, those tools only work for code you control, which doesn’t help with modularity.

Finally, here are links to previous Effective Concurrency columns (based on the dates they hit the web, not the magazine print issue dates):
July 2007 The Pillars of Concurrency
August 2007 How Much Scalability Do You Have or Need?
September 2007 Use Critical Sections (Preferably Locks) to Eliminate Races
October 2007 Apply Critical Sections Consistently

Trip Report: October 2007 ISO C++ Standards Meeting

The ISO C++ committee met in Kona on October 1-6. Here’s a quick summary of what we did, and information about upcoming meetings.

What’s in C++0x, and When?

As I’ve blogged before (most recently here and here), the ISO C++ committee is working on the first major revision to the C++ standard, informally called C++0x, which will include a number of welcome changes and enhancements to the language itself as well as to the standard library.

The committee is determined to finish technical work on this standard in 2009. To that end, we plan to publish a feature-complete draft in September 2008, and then spend the next year fine-tuning it to address public feedback and editorial changes.

Note that this represents essentially a one-year slip from the schedule I blogged about at the beginning of the year. Why the slip? There are a few major features that haven’t yet been "checked in" to the working paper and that are long poles for this release. Here are three notable ones:

  • concepts (a way to write constraints for templates that lets us get way better error messages, overloading, and general template usability)
  • advanced concurrency libraries (e.g., thread pools and reader-writer locks)
  • garbage collection

Probably the biggest thing we did at this meeting was to choose time over scope: We decided that we can’t ship C++0x without concepts, but we can and will ship without some or all of the other two.

Concurrency: This was a pivotal meeting for concurrency. We voted a slew of concurrency extensions into the working draft at this meeting, as they happily all got to the ready point at the same time (see below for details):

  • memory model
  • atomics library
  • basic threads, locks, and condition variables

And we decided to essentially stop there; we still plan to add an asynchronous future<T> type in C++0x, but features like thread pools are being deferred until after this standard.

Garbage collection: For C++0x, we’re not going to add explicit support for garbage collection, and only intend to find ways to remove blocking issues like pointer hiding that make it difficult to add garbage collection in a C++ implementation. In particular, the scope of this feature is expected to be constrained as follows:

  • C++0x will include making some uses of disguised pointers undefined, and providing a small set of functions to exempt specific objects from this restriction and to designate pointer-free regions of memory (where these functions would have trivial implementations in a non-collected conforming implementation).
  • C++0x will not include explicit syntax or functions for garbage collection or related features such as finalization. These could well be considered again after C++0x ships.

What we voted into draft C++0x

Here is a list of the main features we voted into the C++0x working draft at this meeting, with links to the relevant papers to read for more information.

nullptr (N2431)

This is an extension from C++/CLI that allows explicitly writing nullptr to designate a null pointer, instead of using the unfortunately-overloaded literal 0 (including its macro spelling of NULL), which makes it hard to distinguish between a null pointer and a zero integer. The proposal was written by myself and Bjarne.

Explicit conversion operators (N2437 and N2434)

You know how in C++ you can make converting constructors only fire when invoked explicitly?

class C {
  C( int );
  explicit C( string );
};

void f( C );

f( 1 );          // ok, implicit conversion from int to C
f( "xyzzy" );    // error, no implicit conversion from string literal to C
f( C("xyzzy") ); // ok, explicit conversion to C

But C++ has two ways to write an implicit conversion — using a one-argument constructor as above to convert "from" some other type, or as a conversion operator "to" some other type as shown below, and we now allow explicit also on this second form:

class C {
  operator int();
  explicit operator string(); // <– the new feature
};

void f( int );
void g( string );

C c;
f( c );         // ok, implicit conversion from C to int
g( c );         // error, no implicit conversion from C to string
g( string(c) ); // ok, explicit conversion to string
Now the feature is symmetric, which is cool. See paper N2434 for how this feature is being used within the C++ standard library itself.

Concurrency memory model (N2429)

As I wrote in "The Free Lunch Is Over", chip designers and compiler writers "are under so much pressure to deliver ever-faster CPUs that they’ll risk changing the meaning of your program, and possibly break it, in order to make it run faster." This only gets worse in the presence of multiple cores and processors.

A memory model is probably the lowest-level treaty between programmers and would-be optimizers, and fundamental for any higher-level concurrency work. Quoting from my memory model paper: "A memory model describes (a) how memory reads and writes may be executed by a processor relative to their program order, and (b) how writes by one processor may become visible to other processors. Both aspects affect the valid optimizations that can be performed by compilers, physical processors, and caches, and therefore a key role of the memory model is to define the tradeoff between programmability (stronger guarantees for programmers) and performance (greater flexibility for reordering program memory operations)."

Atomic types (N2427)

Closely related to the memory model is the feature of atomic types that are safe to use concurrently without locks. In C++0x, they will be spelled "atomic<T>". Here’s a sample of how to use them for correct (yes, correct!) Double-Checked Locking in the implementation of a singleton Widget:

atomic<Widget*> Widget::pInstance = 0;

Widget* Widget::Instance() {
  if( pInstance == 0 ) {          // 1: first check
    lock<mutex> hold( mutW );     // 2: acquire lock (crit sec enter)
    if( pInstance == 0 ) {        // 3: second check
      pInstance = new Widget();   // 4: create and assign
    }
  }                               // 5: release lock
  return pInstance;               // 6: return pointer
}
Threading library (N2447)

You might have noticed that the above example used a lock<mutex>. Those are now in the draft standard too. C++0x now includes support for threads, various flavors of mutexes and locks, and condition variables, along with some other useful concurrency helpers like an efficient and portable std::call_once.

Some other approved features

  • N2170 "Universal Character Names in Literals"
  • N2442 "Raw and Unicode String Literals; Unified Proposal (Rev. 2)"
  • N2439 "Extending move semantics to *this (revised wording)"
  • N2071 "Iostream manipulators for convenient extraction and insertion of struct tm objects"
  • N2401 "Code Conversion Facets for the Standard C++ Library"
  • N2440 "Abandoning a Process"
  • N2436 "Small Allocator Fix-ups"
  • N2408 "Simple Numeric Access Revision 2"

Next Meetings

Here are the next meetings of the ISO C++ standards committee, with links to meeting information where available.

  • February 24 – March 1, 2008: Bellevue, Washington, USA (N2465)
  • June 8-14, 2008: Sophia Antipolis, France
  • September 14-20, 2008: San Francisco Bay area, California, USA (this is the anticipated date)

The meetings are public, and if you’re in the area please feel free to drop by.