A lot of people seem to have opinions about whether hardware trends are generally moving things on-chip or off-chip. I just saw another discussion about this on Slashdot today. Here’s part of the summary of that article:
"In the near future the Central Processing Unit (CPU) will not be as central anymore. AMD has announced the Torrenza platform that revives the concept op [sic] co-processors. Intel is also taking steps in this direction with the announcement of the CSI. With these technologies in the future we can put special chips (GPU’s, APU’s, etc. etc.) directly on the motherboard in a special socket. Hardware.Info has published a clear introduction to AMD Torrenza and Intel CSI and sneak peaks [sic] into the future of processors."
Sloppy spelling aside (and, sigh, a good example of why not to live on spell-check alone), is this a real trend?
Of course it is. But the exact reverse trend is also real, and I happen to think the reverse trend is more likely to dominate in the medium term. I’ll briefly explain why, and why I think the above is highlighting the wrong trend and making the wrong prediction.
Two Trends, Both Repeating Throughout (Computer) History
Those who’ve been watching, or simply using, CPUs for years have probably seen both of the following apposite [NB, this spelling is intentional] trends, sometimes at the same time for different hardware functions:
- Stuff moves off the CPU. For example, first the graphics are handled by the CPU; then they’re moved off to a separate GPU for better efficiency.
- Stuff moves onto the CPU. For example, first the FPU is a coprocessor; then it’s moved onto the CPU for better efficiency.
The truth is, the wheel turns. It can turn in different directions at the same time for different parts of the hardware. The fact that we happen to be looking at a “move off the chip” moment for one set of components does not a trend make.
Consider why things move on or off the CPU:
- When the CPU is already pretty busy much of the time and doesn’t have much spare capacity, people start making noises about moving this or that off "for better efficiency," and they’re right.
- When the CPU is already pretty idle most of the time, or system cost is an issue, people start making the reverse noises "for better efficiency," and they’re right. (Indeed, if you read the Woz interview that I blogged about recently, you’ll notice how he repeatedly emphasizes his wonderful adventures in the art of the latter — namely, doing more with fewer chips. It led directly to the success of the personal computer, years before it would otherwise likely have happened. Thanks, Woz.)
Add to the mix that general-purpose CPUs by definition can’t be as efficient as special-purpose chips, even when they can do comparable work, and we can better appreciate the balanced forces in play and how they can tip one way or another at different times and for different hardware features.
What’s New or Different Now?
So now mix in the current sea change away from ever-faster uniprocessors and toward processors with many cores, where the individual cores aren’t getting remarkably faster. Will this tip the long-term balance toward on-processor designs or toward co-processor designs?
The first thing that might occur to us is that there’s still a balance of forces. Specifically, we might consider these effects that I mentioned in the Free Lunch paper:
- On the one hand, this is a force in favor of coprocessors, thus moving work off the CPU. A single core isn’t getting faster the way it used to, and we software folks are gluttons for CPU cycles and are always asking the hardware to do more stuff; after all, we hardly ever remove software features. Therefore, for many programs, CPU cycles are more dear, so we’ll want to use them for the program’s code as much as we can instead of frittering them away on other work. (This reasoning applies mainly to single-threaded programs and non-scalable multi-threaded programs, of course.)
- On the other hand, this is also a force against coprocessors, for moving work onto the CPU. We’re now getting a bunch (and soon many bunches) of cores, not just one. Until software gets its act together and we start seeing more mainstream manycore-exploiting applications, we’re going to be enjoying a minor embarrassment of riches in terms of spare CPU capacity, and presumably we’ll be happy using those otherwise idle cores to do work that expensive secondary chips might otherwise do. At least until we have applications ready to soak up all those cycles.
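To make that second bullet concrete, here’s a minimal sketch in C++ (the names checksum and application_work are hypothetical stand-ins I made up for illustration, not anything from a real system): while one core keeps running the program’s own logic, an otherwise idle core absorbs the kind of secondary work that might otherwise argue for a dedicated chip.

```cpp
// Minimal sketch: soak up an otherwise idle core with "coprocessor-style" work.
#include <cstdint>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Stand-in for work we might otherwise offload to a dedicated chip
// (crypto, checksumming, compression, ...).
uint64_t checksum(const std::vector<uint8_t>& data) {
    return std::accumulate(data.begin(), data.end(), uint64_t{0});
}

// Stand-in for the program's own single-threaded work.
uint64_t application_work() {
    uint64_t acc = 0;
    for (uint64_t i = 0; i < 100000000; ++i) acc += i % 7;
    return acc;
}

int main() {
    std::vector<uint8_t> buffer(64 * 1024 * 1024, 0x5A);

    // Hand the secondary work to a spare core...
    auto pending = std::async(std::launch::async, checksum, std::cref(buffer));

    // ...while this core keeps running the application itself.
    const uint64_t app_result = application_work();

    std::cout << "app: " << app_result
              << "  checksum: " << pending.get() << '\n';
}
```

The checksum itself isn’t the point; the point is that until mainstream software routinely fills every core, cycles like these are effectively free.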
So are the forces still in balance, as they have ever been? Are we just going to see more on-the-chip / off-the-chip cycles?
In part yes, but the above analysis is looking more at symptoms than at causes — the reasons why things are happening. The real point is more fundamental, and at the heart of why the free lunch is over:
- On the gripping hand, the fundamental reason why we’re getting so many cores on a chip is that CPU designers don’t know what to do with all those transistors. Moore’s Law is still happily handing out a doubling of transistors per chip every 18 months or so (and will keep doing that for probably at least another decade, thank you, despite recurring ‘Moore’s Law is dead!’ discussion threads on popular forums). That’s the main reason why we’re getting multicore parts: About five years ago, commodity CPU designers pretty much finished mining the "make the chip more complex to run single-threaded code faster" path that they had been mining to good effect for 30 years (there will be more gains there, but more incremental than exponential), and so we’re on the road to manycore instead.
But we’re also on the road to doing other things with all those transistors, besides just manycore. After all, manycore isn’t the only, or necessarily the best, use for all those gates. Now, I said "all" deliberately: To be sure you don’t get me wrong, let me emphasize that manycore is a wonderful new world and a great use for many of those transistors, and we should be excited about that; it’s just not the only or best use for all of those transistors.
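For a rough sense of scale (back-of-the-envelope arithmetic only, not any vendor’s roadmap): one doubling every 18 months, sustained for another decade (120 months), multiplies the per-chip transistor budget by about

$$2^{120/18} = 2^{6.67} \approx 100$$

which is to say roughly two orders of magnitude more gates to spend, whether on more cores or on the other on-chip functions discussed next.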
What Will Dominate Over the Next Decade? More On-CPU Than Off-CPU
It’s no coincidence that companies like AMD are buying companies like ATI. I’m certainly not going out on much of a limb to predict the following:
- Of course we’ll see some GPUs move on-chip. It’s a great way to soak up transistors and increase bandwidth between the CPU and GPU. Knowing how long CPU design/production pipelines are, don’t expect to see this in earnest for about 3-5 years. But do expect to see it.
- Of course we’ll see some NICs move on-chip. It’s a great way to soak up transistors and increase bandwidth between the CPU and NIC.
- Of course we’ll see some [crypto, security checking, etc., and probably waffle-toasting, and shirt ironing] work move on-chip.
Think "system on a chip" (SoC). By the way, I’m not claiming to make any earth-shattering observation here. All of this is based on public information and/or fairly obvious inference, and I’m sure it has been pointed out by others. Much of it already appears on various CPU vendors’ official roadmaps.
There are just too many transistors available, and located too conveniently close to the CPU cores, to not want to take advantage of them. Just think of it in real estate terms: It’s all about "location, location, location." And when you have a low-rent location (those transistors keep getting cheaper) on prime beachfront property (on-chip), of course there’ll be a mad rush to buy up the property and a construction boom to build high-rises on the beachfront (think silicon Miami) until the property values reach supply-demand equilibrium again (we get to balanced SoC chips that evenly spend those enormous transistor budgets, the same way we’ve already reached balanced traditional systems). It’s a bit like predicting that rain will fall downward. And it doesn’t really matter whether we think skyscrapers on the beach are aesthetically pleasing or not.
Yes, the on-chip/off-chip wheel will definitely keep turning. Don’t quote this five years from now and say it was wrong by pointing at some new coprocessor where some work moved off-chip; of course that will happen too. And so will the reverse. That both of those trends will continue isn’t really news, at least not to anyone who’s been working with computers for the past couple of decades. It’s just part of the normal let’s-build-a-balanced-system design cycle as software demands evolve and different hardware parts progress at different speeds.
The news lies in the balance between the trends: The one by far most likely to dominate over the next decade will be for now-separate parts to move onto the CPU, not away from it. Pundit commentary notwithstanding, the real estate is just too cheap. Miami, here we come.