This month’s Effective Concurrency column, “volatile vs. volatile”, is now live on DDJ’s website and also appears in the print magazine. (As a historical note, it’s DDJ’s final print issue, as I mentioned previously.)
This article aims to answer the frequently asked question: “What does volatile mean?” The short answer: “It depends: do you mean Java/.NET volatile or C/C++ volatile?” From the article:
What does the volatile keyword mean? How should you use it? Confusingly, there are two common answers, because depending on the language you use volatile supports one or the other of two different programming techniques: lock-free programming, and dealing with ‘unusual’ memory.
Adding to the confusion, these two different uses have overlapping requirements and impose overlapping restrictions, which makes them appear more similar than they are. Let’s define and understand them clearly, and see how to spell them correctly in C, C++, Java and C# — and not always as volatile. …
I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns and the DDJ print magazine issue in which they first appeared:
The Pillars of Concurrency (Aug 2007)
How Much Scalability Do You Have or Need? (Sep 2007)
Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)
Apply Critical Sections Consistently (Nov 2007)
Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)
Use Lock Hierarchies to Avoid Deadlock (Jan 2008)
Break Amdahl’s Law! (Feb 2008)
Going Superlinear (Mar 2008)
Super Linearity and the Bigger Machine (Apr 2008)
Interrupt Politely (May 2008)
Maximize Locality, Minimize Contention (Jun 2008)
Choose Concurrency-Friendly Data Structures (Jul 2008)
The Many Faces of Deadlock (Aug 2008)
Lock-Free Code: A False Sense of Security (Sep 2008)
Writing Lock-Free Code: A Corrected Queue (Oct 2008)
Writing a Generalized Concurrent Queue (Nov 2008)
Understanding Parallel Performance (Dec 2008)
Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)
volatile vs. volatile (Feb 2009)
12 thoughts on “Effective Concurrency: volatile vs. volatile”
@Yang: That’s not quite true. In .NET today, ‘volatile’ operations are sequentially consistent (SC, fully ordered) except only that a volatile store followed by a volatile load can be reordered. This is true on both x86/x64 and Itanium. This doesn’t affect most uses of volatile, although Joe Duffy correctly explains in that article and in his blog that there are situations where it can change the expected meaning of code. I consider this a weakness, and am working with the CLR team to make .NET volatiles be SC in a future release post-VS2010 as part of my Prism memory model work. Note that the above does not apply to Java volatiles, which have been guaranteed to be SC since JSR-133 / Java 5.
Your article states that for .NET, `volatile` operations cannot be reordered (which implies a memory barrier). However, according to the following article (and various other resources I’ve encountered on the web):
operations are allowed to be reordered. Any memory barrier must be created explicitly.
I wonder why there is so much “volatile” peppered all over the atomic functions on Windows and Mac OS X (the GCC built-ins do not use it, as I would expect).
A variable declared C++0x “volatile atomic” will have the union of the guarantees (and restrictions) of “C/C++ volatile” and std::atomic. It will be suitable for both inter-thread communication and communication with threads or hardware that don’t follow the memory model. It’s unusual to need that in practice, as you usually use one control variable to synchronize among your own threads, where atomic is the right tool, and another kind of variable to talk to hardware or otherwise unusual memory, where volatile is the right tool. Still, they’re orthogonal concepts and if you do want both C++0x will let you express it.
Hm, seems as if I have to type “&lt;” when I need a “less than” character. So I’ll try again:
“What would be the difference to “atomic&lt;volatile T&gt;”?”
Sorry, hit the submit button too early. The question should read:
“What would be the difference to “atomic&lt;volatile T&gt;”?”
in your article you mention “volatile atomic” as a way to make a variable both: of unusual type (volatile) and guaranteed atomic.
What would be the difference to “atomic&lt;volatile T&gt;”?
“volatile” doesn’t require hardware registers or address multiplexing to be in play. Common APIs like shared memory between processes can fit the bill as well.
Will you be commenting on the “multicore advantages stop at about 8 cores” study from the US government’s Sandia and Oak Ridge labs:
Nick: Yes, the issues apply in principle to uniprocessor systems too. Think of it this way: any level of the execution chain, from compiler to processor to cache (to future transactional memory in SW or HW to anything else) can transform reads and writes and must play nice with the memory model. On a uniprocessor system you eliminate only the processor and probably the cache from that list, but the remaining parts (e.g., the compiler) can still perform unwanted transformations.
Roger: Yes, that’s another valid example. The list wasn’t meant to be exhaustive, just to give a few common examples to motivate why this is a legitimate issue. Another one I mentioned in the table but not in the text is setjmp-safety; there are more.
Re C++ style, volatile.
When I was doing a lot of hardware engineering, one of the classic hardware semantics was reading from a register to trigger an event.
You don’t mention this in your classic examples list; is this because it is not safe to assume the compiler won’t add additional reads of a volatile register to the code?
The articles on lock-free programming have been immensely useful even from the lock-based concurrency perspective.
I have a question, hoping you have some insight. Do these issues you’ve described with lock-free programming exist on uniprocessor systems?
In particular I am interested in problems arising from processor or cache memory reordering. The assembly can always be inspected to verify that the compiler hasn’t produced non-thread-safe code.