There are two main ways to achieve superlinear scalability, that is, to use P processors to compute an answer more than P times faster…:
- Do disproportionately less work.
- Harness disproportionately more resources.
Last month, we focused on the first point, using parallel search to illustrate how it naturally achieves superlinear speedups when matches are not distributed evenly: some workers get "rich" subranges and find a match sooner, and the whole search benefits because we can stop as soon as any worker finds a match.
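To make that early-out effect concrete, here is a minimal C++ sketch of a parallel find-any search, assuming a shared atomic flag that each worker polls; the names (such as search_subrange), the data layout, and the predicate are illustrative assumptions, not code from the column.

```cpp
// A minimal sketch of parallel "find any match" with an early out, assuming a
// shared flag that every worker polls. All names here (search_subrange, the
// data layout, the predicate) are illustrative, not taken from the column.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Scan data[first, last) for an element satisfying 'matches', quitting early
// if another worker has already published a hit via 'found'.
template <typename T, typename Pred>
void search_subrange(const std::vector<T>& data, std::size_t first, std::size_t last,
                     Pred matches, std::atomic<bool>& found,
                     std::atomic<std::size_t>& result) {
    for (std::size_t i = first; i < last && !found.load(std::memory_order_relaxed); ++i) {
        if (matches(data[i])) {
            // The first worker to find a match publishes its index; the others
            // notice the flag on their next loop iteration and stop.
            if (!found.exchange(true, std::memory_order_acq_rel)) {
                result.store(i, std::memory_order_release);
            }
            return;
        }
    }
}

int main() {
    // Matches are clustered in the last tenth of the range: a sequential scan
    // must walk ~90% of the array before finding anything, but the worker that
    // owns that "rich" subrange finds a hit almost immediately and lets
    // everyone else stop.
    std::vector<int> data(1'000'000, 0);
    for (std::size_t i = 900'000; i < data.size(); i += 1000) data[i] = 42;

    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<bool> found{false};
    std::atomic<std::size_t> result{data.size()};  // sentinel: "no match"

    std::vector<std::thread> pool;
    const std::size_t chunk = data.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t first = w * chunk;
        const std::size_t last  = (w + 1 == workers) ? data.size() : first + chunk;
        pool.emplace_back([&, first, last] {
            search_subrange(data, first, last,
                            [](int x) { return x == 42; }, found, result);
        });
    }
    for (auto& t : pool) t.join();

    if (found.load()) std::cout << "match at index " << result.load() << '\n';
    else              std::cout << "no match found\n";
}
```

With this distribution, the sequential scan does nearly all its work before finding anything, while one parallel worker succeeds almost at once and stops the rest, so the observed speedup can exceed the number of workers.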
This month, we’ll conclude our look at the first point with a few more examples, and then consider how to achieve superlinear speedups by harnessing more resources: quite literally, running on a bigger machine without any change in the hardware. …
| Month | Column |
| --- | --- |
| July 2007 | The Pillars of Concurrency |
| August 2007 | How Much Scalability Do You Have or Need? |
| September 2007 | Use Critical Sections (Preferably Locks) to Eliminate Races |
| October 2007 | Apply Critical Sections Consistently |
| November 2007 | Avoid Calling Unknown Code While Inside a Critical Section |
| December 2007 | Use Lock Hierarchies to Avoid Deadlock |
| January 2008 | Break Amdahl’s Law! |
| February 2008 | Going Superlinear |
| March 2008 | Super Linearity and the Bigger Machine |