Daniel Moth’s C++ AMP session is now online

In my keynote on Wednesday, I highlighted just the two most important features of the C++ AMP programming model. That afternoon, my coding colleague and demo demigod Daniel Moth gave a 45-minute session covering the entire C++ AMP programming model, walking through all the features with more examples. Daniel’s talk is now also online at Channel 9. I hope you enjoy it.

Note: The PDF slides link is easy to miss but important, because the screen isn’t easy to read in the video itself.

C++ AMP keynote is online

Yesterday I had the privilege of talking about some of the work we’ve been doing to support massive parallelism on GPUs in the next version of Visual C++. The video of my talk announcing C++ AMP is now available on Channel 9. (Update: Here’s an alternate link; it seems to be posted twice.)

The first 20 minutes have nothing to do with C++ in particular, or with any platform in particular; instead, they make the case that the right way to view the “trends” of multicore computing, GPU computing, and cloud computing (HaaS) is that they are not three trends at all, but merely facets of a single underlying trend: heterogeneous parallel computing.

If they are, then one programming model should be able to address them all. We think we’ve found one.

The main reason we decided to build a new model is that we believe there needs to be a single model with all of the following attributes:

  • C++, not C: It should leverage C++’s power for strong abstraction without sacrificing performance, not just be a dialect of C.
  • Mainstream: It should be programmable by millions of developers, not just by a priesthood. Litmus test: Is the Hello World parallel GPU program a page and a half, or a couple of lines? (See the sketch after this list.)
  • Minimal: It adds just one general-purpose language extension that addresses not only the immediate problem (dealing with cores that can’t support full C++) but many others. With the right general-purpose extension, the rest can be done as just a library.
  • Portable: It allows shipping a single EXE that can use any combination of GPU vendors’ hardware. The initial implementation uses DirectCompute and supports all devices that are DX11 capable; DirectCompute is just an implementation detail of the first release, and the model can (and I expect will) be implemented to directly talk to any interesting hardware.
  • General and future-proof: The initial release will focus on GPU computing, but it’s intended to enable people to write code for the GPU in a way that in the future we can recompile with few or no changes to spread across any and all accessible compute cores, including ones in the cloud.
  • Open: I mentioned that Microsoft intends to make the C++ AMP specification open, and encourages its implementation on other C++ compilers for any hardware or OS target. AMD announced that they will implement C++ AMP in their FSA reference compiler. NVidia also announced support.
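
To give a sense of what “a couple of lines” means in practice, here is a minimal sketch of element-wise vector addition in C++ AMP. It follows the array_view / parallel_for_each pattern, with restrict(amp) as the single language extension that marks code runnable on the accelerator. The function name is mine, and the exact keyword spellings here reflect the published C++ AMP design; the prerelease bits shown in the talks may differ in detail.

```cpp
#include <amp.h>      // C++ AMP: array_view, parallel_for_each, index, restrict(amp)
#include <vector>

// Illustrative sketch: element-wise vector addition on the GPU.
// The lambda body runs on the accelerator; restrict(amp) is the one
// language extension that marks it as callable from GPU code.
void add_arrays(const std::vector<float>& a,
                const std::vector<float>& b,
                std::vector<float>& sum)
{
    using namespace concurrency;

    const int n = static_cast<int>(sum.size());
    array_view<const float, 1> av(n, a);   // wrap host data; copied to the GPU on demand
    array_view<const float, 1> bv(n, b);
    array_view<float, 1>       cv(n, sum);
    cv.discard_data();                     // results only flow GPU -> host

    parallel_for_each(cv.extent, [=](index<1> i) restrict(amp) {
        cv[i] = av[i] + bv[i];
    });

    cv.synchronize();                      // copy results back into 'sum'
}
```

Everything above besides restrict(amp) is ordinary library code, which is the point of the “minimal” attribute: one general-purpose extension, and the rest ships as a library.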

We’re really excited about this, and I hope you find the information in the talk to be useful. A prerelease implementation in Visual C++ that runs on Windows will be available later this year. More to come…

AFDS Keynote Live Stream

Just a reminder for those interested in using C++ to harness GPUs for fast code: My keynote at the AMD Fusion Developer Summit will be webcast live. I’ll post another link when the recorded talk is available for on-demand viewing.

The talk starts at 8:30am U.S. Pacific time tomorrow (Wed June 15).

Today Jem Davies of ARM also gave a keynote. He’s a great speaker with a great message; look for it when it becomes available on demand. Recommended viewing whether or not you target ARM processors.