Effective Concurrency: Maximize Locality, Minimize Contention

The latest Effective Concurrency column, “Maximize Locality, Minimize Contention”, just went live on DDJ’s site, and also appears in the print magazine. From the article:

Want to kill your parallel application’s scalability? Easy: Just add a dash of contention.

Locality is no longer just about fitting well into cache and RAM, but also about avoiding scalability busters by keeping tightly coupled data physically close together and separately used data far, far apart. …

I hope you enjoy it.
Finally, here are links to previous Effective Concurrency columns (based on the dates they hit the web, not the magazine print issue dates):
July 2007 The Pillars of Concurrency
August 2007 How Much Scalability Do You Have or Need?
September 2007 Use Critical Sections (Preferably Locks) to Eliminate Races
October 2007 Apply Critical Sections Consistently
November 2007 Avoid Calling Unknown Code While Inside a Critical Section
December 2007 Use Lock Hierarchies to Avoid Deadlock
January 2008 Break Amdahl’s Law!
February 2008 Going Superlinear
March 2008 Super Linearity and the Bigger Machine
April 2008 Interrupt Politely
May 2008 Maximize Locality, Minimize Contention

Part 2 of concurrency interview with DevX

Part 2 of DevX’s interview with me about concurrency just went live on the web. From the article’s blurb:

What does the future hold for concurrency? What will happen to the tools and techniques around concurrent programming? In part two of our series, concurrency guru Herb Sutter talks about these issues and what developers need to be reading to understand concurrency. 

… In this final installment he looks into his crystal ball with an eye towards the future and gives developers hints for the resources they need to be better concurrent programmers.

This part touches on a variety of topics: right-now items, such as delivering parallelism inside libraries to shield the programmer from having to know about concurrency, and where to look for further reading; and future topics, such as transactional memory and the coming homogeneous vs. heterogeneous manycore CPUs. I hope you enjoy it.

(March’s part 1 is here.)