What is a high-performance data structure? To answer that question, we’re used to applying normal considerations such as Big-Oh complexity, memory overhead, locality, and traversal order. All of those apply to both sequential and concurrent software.
But in concurrent code, we need to consider two additional things to help us pick a data structure that is also sufficiently concurrency-friendly:
- In parallel code, your performance needs likely include the ability to allow multiple threads to use the data at the same time. If this is (or may become) a high-contention data structure, does it allow for concurrent readers and/or writers in different parts of the data structure at the same time? If the answer is, “No,” then you may be designing an inherent bottleneck into your system and asking for lock convoys, where threads serialize as each waits its turn to be the one thread allowed to use the data structure.
- On parallel hardware, you may also care about minimizing the cost of memory synchronization. When one thread updates one part of the data structure, how much memory needs to be moved to make the change visible to another thread? If the answer is, “More than just the part that has ostensibly changed,” then again you’re asking for a potential performance penalty, this time due to cache sloshing as more data has to move from the core that performed the update to the core that is reading the result.
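One common way to let writers work in different parts of a structure at the same time is lock striping: instead of one mutex guarding the whole container, each region gets its own. Here is a minimal sketch (the `StripedMap` type and its interface are hypothetical, not from this column) of a hash map whose buckets are partitioned into independently locked stripes:

```cpp
// Sketch of lock striping (hypothetical StripedMap, illustration only).
// Each stripe has its own mutex, so two threads updating keys that hash
// to different stripes never contend with each other -- unlike a single
// global lock, which serializes every operation.
#include <cstddef>
#include <functional>
#include <mutex>
#include <unordered_map>
#include <vector>

class StripedMap {
public:
    explicit StripedMap(std::size_t stripes)
        : buckets_(stripes), locks_(stripes) {}

    void insert(int key, int value) {
        std::size_t i = stripe(key);
        std::lock_guard<std::mutex> guard(locks_[i]);  // lock one stripe only
        buckets_[i][key] = value;
    }

    bool find(int key, int& out) const {
        std::size_t i = stripe(key);
        std::lock_guard<std::mutex> guard(locks_[i]);
        auto it = buckets_[i].find(key);
        if (it == buckets_[i].end()) return false;
        out = it->second;
        return true;
    }

private:
    std::size_t stripe(int key) const {
        return std::hash<int>{}(key) % buckets_.size();
    }

    std::vector<std::unordered_map<int, int>> buckets_;
    mutable std::vector<std::mutex> locks_;  // one lock per stripe
};
```

The design choice is the usual contention trade-off: more stripes mean less contention but more memory for locks; production containers such as Java’s `ConcurrentHashMap` historically used this technique.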
It turns out that both of these answers are directly influenced by whether the data structure allows truly localized updates. …
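To make “truly localized updates” concrete, consider inserting into a singly linked list: only the new node and one predecessor pointer are written, so only those cache lines must move between cores. Contrast that with a balanced tree, where a single insertion can trigger rebalancing that touches many nodes. A minimal sketch (the `Node` type and `insert_after` helper are illustrative, not from this column):

```cpp
// Sketch: linked-list insertion as a localized update (illustrative types).
#include <cstddef>

struct Node {
    int value;
    Node* next;
};

// Insert a new node immediately after 'pos'. The only shared memory
// written is the new node itself and pos->next -- two small writes,
// regardless of how large the list is.
Node* insert_after(Node* pos, int value) {
    Node* n = new Node{value, pos->next};
    pos->next = n;  // a single pointer store publishes the change
    return n;
}
```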
| Month | Column |
|---|---|
| August 2007 | The Pillars of Concurrency |
| September 2007 | How Much Scalability Do You Have or Need? |
| October 2007 | Use Critical Sections (Preferably Locks) to Eliminate Races |
| November 2007 | Apply Critical Sections Consistently |
| December 2007 | Avoid Calling Unknown Code While Inside a Critical Section |
| January 2008 | Use Lock Hierarchies to Avoid Deadlock |
| February 2008 | Break Amdahl’s Law! |
| March 2008 | Going Superlinear |
| April 2008 | Super Linearity and the Bigger Machine |
| May 2008 | Interrupt Politely |
| June 2008 | Maximize Locality, Minimize Contention |
| July 2008 | Choose Concurrency-Friendly Data Structures |