This month’s Effective Concurrency column, “Associate Mutexes with Data to Prevent Races”, is now live on DDJ’s website.
From the article:
Come together: Associate mutexes with the data they protect, and you can make your code race-free by construction
Race conditions are one of the worst plagues of concurrent code: They can cause disastrous effects all the way up to undefined behavior and random code execution, yet they’re hard to discover reliably during testing, hard to reproduce when they do occur, and the icing on the cake is that we have immature and inadequate race detection and prevention tool support available today.
The holy grail of the Consistency pillar is to make a concurrent program race-free and deadlock-free by construction. … This article shows how to achieve the "race-free by construction" grail via sound engineering, with automatic and deterministic race identification at test time based on code coverage alone.
I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns:
1 The Pillars of Concurrency (Aug 2007)
2 How Much Scalability Do You Have or Need? (Sep 2007)
3 Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)
4 Apply Critical Sections Consistently (Nov 2007)
5 Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)
6 Use Lock Hierarchies to Avoid Deadlock (Jan 2008)
7 Break Amdahl’s Law! (Feb 2008)
8 Going Superlinear (Mar 2008)
9 Super Linearity and the Bigger Machine (Apr 2008)
10 Interrupt Politely (May 2008)
11 Maximize Locality, Minimize Contention (Jun 2008)
12 Choose Concurrency-Friendly Data Structures (Jul 2008)
13 The Many Faces of Deadlock (Aug 2008)
14 Lock-Free Code: A False Sense of Security (Sep 2008)
15 Writing Lock-Free Code: A Corrected Queue (Oct 2008)
16 Writing a Generalized Concurrent Queue (Nov 2008)
17 Understanding Parallel Performance (Dec 2008)
18 Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)
19 volatile vs. volatile (Feb 2009)
20 Sharing Is the Root of All Contention (Mar 2009)
21 Use Threads Correctly = Isolation + Asynchronous Messages (Apr 2009)
22 Use Thread Pools Correctly: Keep Tasks Short and Nonblocking (Apr 2009)
23 Eliminate False Sharing (May 2009)
24 Break Up and Interleave Work to Keep Threads Responsive (Jun 2009)
25 The Power of “In Progress” (Jul 2009)
26 Design for Manycore Systems (Aug 2009)
27 Avoid Exposing Concurrency – Hide It Inside Synchronous Methods (Oct 2009)
28 Prefer structured lifetimes – local, nested, bounded, deterministic (Nov 2009)
29 Prefer Futures to Baked-In “Async APIs” (Jan 2010)
30 Associate Mutexes with Data to Prevent Races (May 2010)
10 thoughts on “Effective Concurrency: Associate Mutexes with Data to Prevent Races”
Yes, that’s a useful technique too. The tradeoff is that you then need a special lock type; in this design I decided to illustrate the technique in a way that hooks into the C++0x standard locks so that you can write things like lock_guard<Group1> (see my reply to a later comment asking about this).
David is spot-on that you want mutexes to cover invariants, not just individual pieces of data. How to design the groups of data that should be covered by the same mutex (or, which mutexes protect which data) is the subject of another article. This article focuses only on the question of how to ensure that you then use the mutexes correctly.
Re map: Yes, thank the Lords of Templates. For types like that, you write a typedef to get rid of the comma. Fun stuff.
Re lock_guard: Actually that is correct and slickly intended. The reason I wrote the group to provide lock/try_lock/unlock was specifically to make it satisfy the C++0x mutex concept, so that you can use the group itself with all the C++0x locking machinery including lock_guard<Group1>. Pretty snazzy, huh?
Right, that is the other (orthogonal) issue. As mentioned in the intro, this article focuses on eliminating races by associating data with locks; the earlier article focuses on exactly that issue: eliminating deadlocks by ensuring locks are taken in a consistent order (typically, but not only, by using lock hierarchies).
Both parts are necessary, and the techniques compose well.
In addition to races and deadlocks, there’s actually a third issue: Composability. I intend to cover that one in a future article, possibly next month’s.
The PROTECTED_MEMBER macro won’t work for types like map<int,int> due to the comma. I see that you have put parentheses around it, but (map<int,int>) isn’t a type, is it?
Also, lock_guard<Group1> doesn’t seem correct. Group1 is not a mutex.
One very important topic (from the “Use Lock Hierarchies” article) that this doesn’t cover is lock ordering — if one thread takes two of these locks in one order, and another thread takes the same two locks in the opposite order, both will be “correct” in that the testcase won’t complain, but the two threads will deadlock. Sometimes.
The previous comment is spot-on. Using mutexes to protect data is a good start, but it doesn’t solve all problems. For instance, to use your ‘transfer-element-between-tables’ example, this code:
// lock mutTable1
table1.erase( x );
// unlock mutTable1
// lock mutTable2
table2.insert( x );
// unlock mutTable2
would pass your ‘protect-the-data’ test. However, it fails the invariant that x is in either table1 or table2. For the purposes of your program, this may not be a problem. But it may be, in which case you need some lock that protects *both* table1 and table2. A better way to think about it is that your mutex would guarantee that any transfer between table1 and table2 was atomic…no observer could ever see that x was neither in table1 nor table2.
There is a lot of research in Java on making more general locks…for instance, the @GuardedBy annotation associates locks with data (essentially your approach). See this CERT secure coding rule for Java for more information about using annotations (along with analysis tools) to eliminate race conditions:
Rather than thinking of a mutex as protecting data, I prefer to think of them as protectors of invariants. Lock hierarchies then fall out naturally for invariants that involve other invariants. I suspect this distinction would also help in deciding which objects should go into which ‘MyData’ class.
I saw something similar on CodeProject once, actually (can’t find the link now). The approach taken there was to have something like a ‘guardian’ class that holds a private MyData object and a mutex. To access the MyData object inside, you had to create a special kind of lock whose constructor took a reference to the guardian object. While the lock exists, the guardian’s mutex would be acquired as usual, but the lock provided a get() method and overloaded the ‘arrow operator’ to return a pointer to the MyData inside the guardian. In this way, you fundamentally could not get to the MyData object without holding the lock. This approach is even more race-free-by-construction, as even the asserts aren’t needed!
I could sketch this out in code, but I fear the commenting system will eat my C++ templates :)
“// use table1 and table2 (both are shared)
table1.erase( x );
table2.insert( x );
Quick: What locks should you take to make this code correct?”
I know that the examples given in the EC series should be seen as illustrative and I don’t want to detract from what the article is trying to teach, but I’d just like to give one answer that I think is relevant for a completely different strategy: Lock x.
One way of getting concurrency in this case is to have a function, say lock_number, that maps keys onto a fixed set of locks, and then to lock mutex[lock_number(x)].
Then we know that relevant parts of both table1 and table2 are locked for the duration of the transaction. This assumes that the internals of table1 and table2 are written in such ways that they, too, are split according to the lock_number function.