Ratzenberger on the Manual Arts

When someone I’ve just met asks me what I do for a living and I say I work in software, I sometimes hear wistful responses like, "Oh, that’s cool and it must be a lot of fun. All I do is…" followed by carpentry, plumbing, teaching, farming, or another occupation or trade.

To me, that’s backwards: Any of those things is at least as important, and at least as worthy of respect and appreciation, as what we technologists do. I usually respond by saying so, and adding: "When the power goes out, I’m pretty much useless; worse still, everything I’ve ever made stops working. What you make still keeps working when the power is out."

I can make a pretty good argument that the non-technology skills are more broadly valuable and applicable, but the more important point is that all skills deserve respect and appreciation. Technology has done wonders, and it’s super fun to be involved in that and to have our advances and skills recognized; after all, software has helped build the tools we use to make other real things, including cars and buildings and spacecraft. But it’s unfortunate when people who focus too heavily on the so-called “higher technology” skills forget, or devalue, the people and skills that built their house, their couch, and their car. Nearly everyone has skills and personal value worth appreciating.

John Ratzenberger makes a similar point:

"The manual arts have always taken precedence over the fine arts. I realized that there is no exception to that rule. You can’t name a fine art that isn’t dependent on a manual art. Someone’s got to build a guitar before Bruce Springsteen can go to work. Someone had to build a ceiling before Michelangelo could go to work."

We live in a richly technologically enabled society, which we can and should enjoy. But when a natural disaster (or just a programming glitch) strikes, and we’re suddenly without power and Nothing Works Any More, we realize how fragile our comfort infrastructure can be.

A couple of months ago here in the northwest United States and western Canada, we had a windstorm that left 1.5 million people without power. Our house was dark and cold for just three nights; many of our friends were out for a week. It can be startling to find yourself unable to talk to anyone without physically going to them: Our cell phones didn’t work at our house, because many cell towers had no power. Many people were chagrined to discover that their landline telephones didn’t work either, because even though the phone lines were fine, the telephone (or base station for a cordless phone) that most people attach requires separate power. Skype wasn’t an option, needless to say, even while the laptop batteries held out. Thus cut off, we were reduced to walking, or to driving where trees didn’t block the roads and where you could find a gas station whose pumps were working (those pumps usually need electricity too).

Interestingly, the home phone would have been fine had we been using a retro 1950s-era handset. We’ve now purchased one to keep around the house for next time. Sometimes simpler is better, even if you can’t see the caller ID.

After the storm, who was it who restored our comfort infrastructure, removed fallen trees, and repaired broken houses and fences? Primarily, it wasn’t us technology nerds; it was the electricians, the carpenters, and the plumbers. How we do appreciate them! Fortunately, those people in return appreciate the software, smartphones, PDAs, and other wonders our industry produces that help them in their own work and leisure. That makes us feel good about contributing something useful back, and so we all get to live in a mutual admiration society.

Thanks for the thought, John. Oh, and Cheers!

Migrant Technology Workers

Today, Slashdot is running an article on immigration. The discussion thread has some interesting notes (though, alas, the signal-to-noise ratio seems noticeably lower than usual even when reading at +5; this seems to be quite a politically charged topic).

It reminds me of something that happened two years ago at the ACCU conference. I was on a panel that seemed innocuous enough, until one of the questions raised was about immigration: ‘Is it a Good Thing or a Bad Thing that people come from other countries in order to do high-tech work here?’ I’d been peripherally aware that this debate was going on in America, was interested to observe that it was also going on in the UK, and was quite surprised at what a hot button it had become. There sure was a lot of discussion, with fervent opinions in both directions.

I don’t have an opinion either way on the issue, but I thought I’d share an observation (an interesting one, I hope) about perspective, as someone who is from Canada, now lives and works in the United States, and thinks well of both places and the people in them: In America (as in Canada), I have seen concerns like this about immigration and about people coming from elsewhere to perform domestic professional jobs. I can understand the feelings behind those concerns. On the other hand, during the 35 years I lived in Canada I saw equally frequent and vocal concerns about emigration and the “brain drain” of professionals leaving Canada for the United States: notably doctors and high-tech folks, but we did lose a lot of actors too. And Bill Shatner. (Just kidding; I like Shatner.)

Isn’t it interesting that when a skilled person moves, some people in the country they’re joining are worried that they’re arriving, and some people in the country that they’re leaving are equally worried that they’re going?

Just a thought. Back on Slashdot, though, my favorite comment was:

"Mr Gates did mention that 640K skilled immigrants ought to be enough for USA."

Maybe till January 18/19, 2038. :-)
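
For anyone who doesn’t catch the date: January 19, 2038 is when a signed 32-bit count of seconds since 1970 (the classic Unix time_t) runs out. A quick sketch of the arithmetic behind that punchline, nothing more:

    // The arithmetic behind the 2038 date: a signed 32-bit counter of seconds
    // since 1970-01-01 overflows after 2^31 - 1 seconds.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const std::int64_t max_seconds = 2147483647;             // 2^31 - 1
        const double seconds_per_year  = 365.25 * 24 * 60 * 60;  // ~31.56 million
        std::printf("A 32-bit time_t covers about %.1f years past 1970,\n",
                    max_seconds / seconds_per_year);
        std::printf("which lands on January 19, 2038 (UTC); one more second "
                    "and it wraps.\n");
    }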

Welcome to Silicon Miami: The System-On-a-Chip Evolution

A lot of people seem to have opinions about whether hardware trends are generally moving things on-chip or off-chip. I just saw another discussion about this on Slashdot today. Here’s part of the summary of that article:

"In the near future the Central Processing Unit (CPU) will not be as central anymore. AMD has announced the Torrenza platform that revives the concept op [sic] co-processors. Intel is also taking steps in this direction with the announcement of the CSI. With these technologies in the future we can put special chips (GPU’s, APU’s, etc. etc.) directly on the motherboard in a special socket. Hardware.Info has published a clear introduction to AMD Torrenza and Intel CSI and sneak peaks [sic] into the future of processors."

Sloppy spelling aside (and, sigh, a good example of why not to rely on spell-check alone), is this a real trend?

Of course it is. But the exact reverse trend is also real, and I happen to think the reverse trend is more likely to dominate in the medium term. I’ll briefly explain why, and why I think the summary above highlights the wrong trend and makes the wrong prediction.

Two Trends, Both Repeating Throughout (Computer) History

Those who’ve been watching, or simply using, CPUs for years have probably seen both of the following apposite [NB, this spelling is intentional] trends, sometimes at the same time for different hardware functions:

  • Stuff moves off the CPU. For example, first the graphics are handled by the CPU; then they’re moved off to a separate GPU for better efficiency.
  • Stuff moves onto the CPU. For example, first the FPU is a coprocessor; then it’s moved onto the CPU for better efficiency.

The truth is, the wheel turns. It can turn in different directions at the same time for different parts of the hardware. And one “move off the chip” moment for one set of components does not a trend make.

Consider why things move on or off the CPU:

  • When the CPU is already pretty busy much of the time and doesn’t have much spare capacity, people start making noises about moving this or that off "for better efficiency," and they’re right.
  • When the CPU is already pretty idle most of the time, or system cost is an issue, people start making the reverse noises "for better efficiency," and they’re right. (Indeed, if you read the Woz interview that I blogged about recently, you’ll notice how he repeatedly emphasizes his wonderful adventures in the art of the latter — namely, doing more with fewer chips. It led directly to the success of the personal computer, years before it would otherwise likely have happened. Thanks, Woz.)

Add to the mix that general-purpose CPUs by definition can’t be as efficient as special-purpose chips, even when they can do comparable work, and we can better appreciate the balanced forces in play and how they can tip one way or another at different times and for different hardware features.

What’s New or Different Now?

So now mix in the current sea change away from ever-faster uniprocessors and toward processors with many cores that are individually not getting much faster. Will this tip the long-term balance toward on-processor designs or toward coprocessor designs?

The first thing that might occur to us is that there’s still a balance of forces. Specifically, we might consider these effects that I mentioned in the Free Lunch paper:

  • On the one hand, this is a force in favor of coprocessors, thus moving work off the CPU. A single core isn’t getting faster the way it used to, and we software folks are gluttons for CPU cycles and are always asking the hardware to do more stuff; after all, we hardly ever remove software features. Therefore, for many programs, CPU cycles are dearer, so we’ll want to spend them on the program’s own code as much as we can instead of frittering them away on other work. (This reasoning applies mainly to single-threaded programs and non-scalable multi-threaded programs, of course.)
  • On the other hand, this is also a force against coprocessors, for moving work onto the CPU. We’re now getting a bunch (and soon many bunches) of cores, not just one. Until software gets its act together and we start seeing more mainstream manycore-exploiting applications, we’re going to enjoy a minor embarrassment of riches in spare CPU capacity, and presumably we’ll be happy to use those otherwise idle cores to do work that expensive secondary chips might otherwise do, at least until we have applications ready to soak up all those cycles. (See the minimal sketch after this list.)
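
To make the second bullet concrete, here’s a minimal sketch (my own illustration, not anything from the Slashdot article or any vendor roadmap) of using an otherwise idle core for work that a dedicated offload chip might otherwise handle; the checksum function is only a stand-in for crypto, compression, or similar side work:

    // A minimal sketch: while one core runs the application's main work, a
    // second, otherwise-idle core computes a checksum that dedicated offload
    // hardware might otherwise handle.
    #include <cstdint>
    #include <future>
    #include <numeric>
    #include <vector>

    std::uint64_t checksum(const std::vector<std::uint8_t>& data) {
        // Stand-in for "work a coprocessor could do" (crypto, compression, ...).
        return std::accumulate(data.begin(), data.end(), std::uint64_t{0});
    }

    int main() {
        std::vector<std::uint8_t> packet(1000000, 42);

        // Ship the side work to another core...
        auto pending = std::async(std::launch::async,
                                  [&packet] { return checksum(packet); });

        // ...while this core keeps running the program's own code here.

        const std::uint64_t sum = pending.get();  // collect the offloaded result
        return sum == 42000000ULL ? 0 : 1;
    }

Whether that actually beats a dedicated chip depends, of course, on how long those cores stay idle and on the point below about general-purpose cores never matching special-purpose silicon for efficiency.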

So are the forces still in balance, as they have ever been? Are we just going to see more on-the-chip / off-the-chip cycles?

In part yes, but the above analysis is looking more at symptoms than at causes — the reasons why things are happening. The real point is more fundamental, and at the heart of why the free lunch is over:

  • On the gripping hand, the fundamental reason we’re getting so many cores on a chip is that CPU designers don’t know what else to do with all those transistors. Moore’s Law is still happily handing out a doubling of transistors per chip every 18 months or so (and will keep doing that for probably at least another decade, thank you, despite recurring ‘Moore’s Law is dead!’ discussion threads on popular forums). That’s the main reason we’re getting multicore parts: About five years ago, commodity CPU designers pretty much finished mining the “make the chip more complex to run single-threaded code faster” vein that they had been working to good effect for 30 years (there will be more gains there, but incremental rather than exponential), and so we’re on the road to manycore instead.

But we’re also on the road to doing other things with all those transistors, besides just manycore. After all, manycore isn’t the only, or necessarily the best, use for all those gates. I said “all” deliberately: Lest you get me wrong, let me emphasize that manycore is a wonderful new world and a great use for many of those transistors, and we should be excited about that; it’s just not the only or best use for all of those transistors.
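
To put “all those transistors” in rough numbers: compounding that 18-month doubling for another decade works out to roughly a hundredfold increase in the transistor budget. Here’s the back-of-the-envelope arithmetic, using only the assumptions above:

    // Compounding the Moore's Law assumption above: one transistor doubling
    // every 18 months, sustained for another decade.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double years_per_doubling = 1.5;
        const double horizon_years      = 10.0;
        const double doublings = horizon_years / years_per_doubling;  // ~6.7
        const double growth    = std::pow(2.0, doublings);            // ~100x
        std::printf("After %.0f years: roughly %.0fx today's transistor budget.\n",
                    horizon_years, growth);
    }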

What Will Dominate Over the Next Decade? More On-CPU Than Off-CPU

It’s no coincidence that companies like AMD are buying companies like ATI. I’m certainly not going out on much of a limb to predict the following:

  • Of course we’ll see some GPUs move on-chip. It’s a great way to soak up transistors and increase bandwidth between the CPU and GPU. Knowing how long CPU design/production pipelines are, don’t expect to see this in earnest for about 3-5 years. But do expect to see it.
  • Of course we’ll see some NICs move on-chip. It’s a great way to soak up transistors and increase bandwidth between the CPU and NIC.
  • Of course we’ll see some [crypto, security checking, etc., and probably waffle-toasting, and shirt ironing] work move on-chip.

Think "system on a chip" (SoC). By the way, I’m not claiming to make any earth-shattering observation here. All of this is based on public information and/or fairly obvious inference, and I’m sure it has been pointed out by others. Much of it already appears on various CPU vendors’ official roadmaps.

There are just too many transistors available, and located too conveniently close to the CPU cores, not to want to take advantage of them. Just think of it in real estate terms: It’s all about “location, location, location.” And when you have a low-rent location (those transistors keep getting cheaper) in prime beachfront property (on-chip), of course there’ll be a mad rush to buy up the property and a construction boom to build high-rises on the beachfront (think silicon Miami) until the property values reach supply-demand equilibrium again (we get to balanced SoCs that evenly spend those enormous transistor budgets, the same way we’ve already reached balanced traditional systems). It’s a bit like predicting that rain will fall downward. And it doesn’t really matter whether we think skyscrapers on the beach are aesthetically pleasing or not.

Yes, the on-chip/off-chip wheel will definitely keep turning. Don’t quote this five years from now and say it was wrong by pointing at some new coprocessor where some work moved off-chip; of course that will happen too. And so will the reverse. That both of those trends will continue isn’t really news, at least not to anyone who’s been working with computers for the past couple of decades. It’s just part of the normal let’s-build-a-balanced-system design cycle as software demands evolve and different hardware parts progress at different speeds.

The news lies in the balance between the trends: The one by far most likely to dominate over the next decade will be for now-separate parts to move onto the CPU, not away from it. Pundit commentary notwithstanding, the real estate is just too cheap. Miami, here we come.