The second Effective Concurrency column in DDJ just went live. It’s titled "How Much Scalability Do You Have or Need?" and makes the case that there’s more than just one important category of throughput scalability, and one size does not fit all. From the article:
In your application, how many independent pieces of work are ready to run at any given time? Put another way, how many cores (or hardware threads, or nodes) can your code harness to get its answers faster? And when should the answers to these questions not be "as many as possible"?
I hope you enjoy it.
Next month’s article is already in post-production. It will be titled "Use Critical Regions (Preferably Locks) to Eliminate Races" and will hit the web about a month from now. One of the early questions it answers is, How bad can a race be? There’s a hint in the article’s tagline: "In a race no one can hear you scream…"
Finally, here are some links to last month’s Effective Concurrency column and to a prior locking article of interest that provides nice background and motivation for the next few EC articles, starting next month:
- July 2007: The Pillars of Concurrency
- March 2005: The Trouble With Locks
I’m looking forward to next month’s read. We really can’t get enough material on safe, yet effective, multi-threading these days. It’s obviously been ignored for far too long, so this series is great.
I’ve recently approached concurrency in a .NET-inspired way — with similarities to InvokeRequired / BeginInvoke — in native C++. I won’t go as far as to claim my library is as effective, speed-wise, as a plain mutex / critical-section approach, but it does make the implementation quite painless. Basically it offers synchronous (blocking) and asynchronous (non-blocking) calls across thread boundaries, with exception re-throwing and return-value passing, and that works quite nicely.
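The commenter doesn’t show their library, but the idea they describe — marshalling blocking and non-blocking calls onto another thread, with return values and exceptions carried back to the caller — can be sketched in standard C++ using `std::packaged_task` and `std::future`. The `Dispatcher` class name and the `invoke` / `begin_invoke` method names below are hypothetical, chosen to mirror the .NET naming the comment alludes to; this is a minimal sketch of the pattern, not the commenter’s actual code.

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>

// Hypothetical sketch: one worker thread that accepts calls from other
// threads, in the spirit of .NET's Control.Invoke / Control.BeginInvoke.
class Dispatcher {
public:
    Dispatcher() : worker_([this] { run(); }) {}
    ~Dispatcher() {
        post([this] { done_ = true; });  // ask the worker to stop
        worker_.join();
    }

    // Asynchronous (non-blocking) call: returns a future carrying either
    // the return value or any exception thrown while f ran on the worker.
    template <typename F>
    auto begin_invoke(F f) -> std::future<decltype(f())> {
        using R = decltype(f());
        // packaged_task is move-only; a shared_ptr makes the posted
        // closure copyable, as std::function requires.
        auto task = std::make_shared<std::packaged_task<R()>>(std::move(f));
        auto fut = task->get_future();
        post([task] { (*task)(); });
        return fut;
    }

    // Synchronous (blocking) call: waits for the result; an exception
    // thrown on the worker thread is re-thrown here by future::get().
    template <typename F>
    auto invoke(F f) -> decltype(f()) {
        return begin_invoke(std::move(f)).get();
    }

private:
    void post(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(job));
        }
        cv_.notify_one();
    }

    void run() {  // worker thread: drain the queue until told to stop
        while (!done_) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return !q_.empty(); });
                job = std::move(q_.front());
                q_.pop();
            }
            job();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_ = false;     // touched only by the worker thread
    std::thread worker_;    // declared last so the queue exists first
};
```

Usage looks like `int x = d.invoke([]{ return 42; });` for a blocking call, or `auto fut = d.begin_invoke(...)` when the caller wants to keep working and collect the result later. The exception re-throwing the comment mentions falls out of `packaged_task` for free: anything thrown on the worker is stored in the future and surfaces at `get()` on the calling thread.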