An implementation of generic lambdas is now available

For those interested in C++ standardization and not already following along at isocpp.org, here’s an item of likely interest:

An implementation of generic lambdas (request for feedback)—Faisal Vali

This week, Faisal Vali shared an initial “alpha” implementation of generic lambdas in Clang. Faisal is the lead author of the proposal (N3418), with Herb Sutter and Dave Abrahams.
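For context, here is a minimal sketch of my own (not from Faisal’s announcement) showing the auto-parameter form the proposal describes: declaring a lambda parameter as auto makes the closure’s call operator behave like a template, so one closure can be invoked with arguments of different types.

    // Minimal sketch of a generic lambda per N3418 (illustrative, not from the alpha's docs).
    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        // The auto parameter makes this closure callable with any printable type.
        auto print = [](const auto& x) { std::cout << x << ' '; };

        std::vector<int> v{3, 1, 4, 1, 5};
        std::for_each(v.begin(), v.end(), print);  // invoked with int
        print("done");                             // the same closure, invoked with const char*
        std::cout << '\n';
    }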

To read and participate in the active discussion, see the message thread on std-proposals.

Here is a copy of Faisal’s announcement…

Read more at isocpp.org…

Compatibility

On yesterday’s thread, I wrote in a comment:

@Jon: Yes, C++ is complex and the complexity is largely because of C compatibility. I agree with Bjarne that there’s a small language struggling to get out — I’ve participated in private experiments to specify such a language, and you can do it in well under half the complexity of C++ without losing the expressivity and flexibility of C++. However, it does require changes to syntax (mainly) and semantics (secondarily) that amount to making it a new language, which makes it a massive breaking change for programs (where existing code doesn’t compile and would need to be rewritten manually or with a tool) and programmers (where existing developers need retraining). That’s a very big tradeoff. So just as C++ ‘won’ in the 90s because of its own strengths plus its C compatibility, C++11 is being adopted because of its own strengths plus its C++98 compatibility. At the end of the day compatibility continues to be a very strong force and advantage that isn’t lightly tossed aside to “improve” things. However, it’s still worthwhile and some interesting things may come of it. Stay tuned, but expect (possible) news in years rather than months.

That reminded me of a snippet from the September 2012 issue of CACM. When I first read it, I thought it was so worthwhile that I made a magnified copy and pasted it on the window outside my office at work – it’s still there. The lightly annotated conclusion is shown here.

The article itself was about a different technology, but its Lessons Learned conclusion reminds us of the paramount importance of two things:

  • Backward compatibility with a simple migration path (“compat”). That’s what keeps you “in the ecosystem.” If you don’t have such compatibility, don’t expect wide adoption unless there are hugely compelling reasons to put up with the breaking change. Moving programmers from Language X to Language Y is roughly the same kind of ecosystem change as moving users from Platform X to Platform Y (e.g., Windows to Mac, iPhone to Android) – in each case there is a lot of relearning, and you lose familiar tools and services, some of which have replacements and some of which don’t.
  • Focusing on complete end-to-end solutions (“SFE,” or scenario-focused engineering). This is why it’s important not to ship just a bunch of individual features: that can leave holes where Steps 1, 2, and 4 of an end-to-end experience are wonderful but Step 3 is awkward, unreliable, or incomplete, which undermines much of the good work in the other steps because they can’t be used as much, or possibly at all.

Enjoy.

Perspective: “Why C++ Is Not ‘Back’”

John Sonmez wrote a nice article on the weekend – both the article and the comments are worth reading.

“Why C++ Is Not ‘Back’”

by John Sonmez

I love C++. […] There are plenty of excellent developers I know today that still use C++ and teach others how to use it and there is nothing at all wrong with that.

So what is the problem then?

[…] Everyone keeps asking me if they need to learn C++, but just like my answer was a few years ago, it is the same today—NO!

Ok, so “NO” in caps is a bit harsh.  A better answer is “why?”

[…]

Although I don’t agree with everything John says, he presents something quite valuable, and unfortunately rare: a thoughtful hype-free opinion. This is valuable especially when (not just “even when”) it differs from your own opinion, because combining different thoughtful views of the same thing gives something exceedingly important and precious: perspective.

By definition, depth perception is something you get from seeing and combining more than one point of view. This is why one of my favorite parts of any conference or group interview is discussions between experts on points where they disagree – because experts do regularly disagree, and when they each present their thoughtful reasons and qualifications and specific cases (not just hype), you get to see why a point is valid, when it is valid and not valid, how it applies to a specific situation or doesn’t, and so on.

I encourage you to read the article and the comments. This quote isn’t the only thing I don’t fully agree with, but it’s the one thing I’ll respond to a little from the article:

There are only about three sensible reasons to learn C++ today that I can think of.

There are other major reasons in addition to those, such as:

  • Servicing, which is harder when you depend on a runtime.
  • Testing, since you lose the ability to test your entire application (compare doing all-static or mostly-static linking with having your application often be compiled/jitted for the first time on an end user’s machine).

These aren’t bad things in themselves, just tradeoffs and reasons to prefer native code vs managed code depending on your application’s needs.

But even the three points John does mention are very broad reasons that apply often, especially the issue of control over memory layouts, to the point where I would say: In any language, if you are serious about performance you will be using arrays a lot (not “always,” just “a lot”). Some languages make that easier and give you much better control over layout in general and arrays in particular, while other languages/environments make it harder (possible! but harder) and you have to “opt out” or “work against” the language’s/runtime’s strong preference for pointer-chasing node-based data structures.
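To make the layout point concrete, here is a small sketch of my own (not from John’s article) contrasting the two styles; the type and function names are purely illustrative:

    // Two ways to store the same records. The contiguous version keeps every
    // element side by side in one allocation; the node-based version reaches
    // each element through a separately allocated node.
    #include <list>
    #include <memory>
    #include <vector>

    struct Sample { double value; };

    // Streaming access over adjacent memory: cache- and prefetcher-friendly.
    double sum_dense(const std::vector<Sample>& v) {
        double total = 0;
        for (const Sample& s : v) total += s.value;
        return total;
    }

    // Every step chases a pointer to a separate heap allocation.
    double sum_sparse(const std::list<std::unique_ptr<Sample>>& l) {
        double total = 0;
        for (const auto& p : l) total += p->value;
        return total;
    }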

For more, please see also my April 2012 Lang.NEXT talk “(Not Your Father’s) C++”, especially the second half:

  • From 19:20, I try to contrast the value and tenets of C++ and of managed languages/environments, including that each is making a legitimate and useful design choice.
  • From 36:00, I try to address the question of: “can managed languages do essentially everything that’s interesting, so native C/C++ code should be really just for things like device drivers?”
  • From 39:00, I talk about when each one is valuable and should be used, especially that if programmer time is your biggest cost, that’s what you should optimize for and it’s exactly what managed languages/environments are designed for.

But again, I encourage you to read John’s article – and the comments – yourself. There’s an unusually high signal-to-noise ratio here.

It’s always nice to encounter a thoughtful balanced piece of writing, whether or not we agree with every detail. Thanks, John!

“256 cores by 2013”?

I just saw a tweet that’s worth commenting on:

[tweet screenshot]

Almost right, and we have already reached that.

I said something similar to the above, but with two important differences:

  1. I said hardware “threads,” not only hardware “cores” – it was about the amount of hardware parallelism available on a mainstream system.
  2. What I gave was a min/max range estimate of roughly 16 to 256 (the upper end counted in hardware threads) under different sets of assumptions.

So: Was I right about 2013 estimates?

Yes, pretty much, and in fact we already reached or exceeded that in 2011 and 2012:

  • Lower estimate line: In 2011 and 2012 parts, Intel Core i7 Sandy Bridge and Ivy Bridge are delivering roughly the expected lower baseline, offering 8-way to 12-way parallelism = 4-6 cores x 2 hardware threads per core.
  • Upper estimate line: In 2012, the chip the article mentioned (then called Larrabee, now known as MIC or Xeon Phi) is delivering 200-way to 256-way parallelism = 50-64 cores x 4 hardware threads per core. Also, in 2011 and 2012, GPUs emerged into more mainstream use for computation (GPGPU) and likewise offer massive compute parallelism, such as 1,536-way parallelism on a machine with a single NVIDIA Tesla card.

Yes, mainstream machines do in fact have examples of both ends of the “16 to 256 way parallelism” range – and machines with higher-end graphics cards go beyond the upper end of that range.

For more on these various kinds of compute cores and threads, see also my article Welcome to the Jungle.
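As a quick aside, if you want to see the CPU half of this number on your own machine, C++11 exposes it directly; note that this reports only CPU hardware threads (cores times threads per core) and knows nothing about GPU lanes:

    #include <iostream>
    #include <thread>

    int main() {
        // Number of hardware threads the implementation can detect; 0 means unknown.
        unsigned n = std::thread::hardware_concurrency();
        std::cout << "CPU hardware threads: " << n << '\n';
    }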

 

Longer answer follows:

Here’s the main part from the article, “Design for Manycore Systems” (August 11, 2009). Remember this was written over three years ago – in the Time Before iPad, when Android was under a year old:

How Much Scalability Does Your Application Need?

So how much parallel scalability should you aim to support in the application you’re working on today, assuming that it’s compute-bound already or you can add killer features that are compute-bound and also amenable to parallel execution? The answer is that you want to match your application’s scalability to the amount of hardware parallelism in the target hardware that will be available during your application’s expected production or shelf lifetime. As shown in Figure 4, that equates to the number of hardware threads you expect to have on your end users’ machines.

Figure 4: How much concurrency does your program need in order to exploit given hardware?

Let’s say that YourCurrentApplication 1.0 will ship next year (mid-2010), and you expect that it’ll be another 18 months until you ship the 2.0 release (early 2012) and probably another 18 months after that before most users will have upgraded (mid-2013). Then you’d be interested in judging what will be the likely mainstream hardware target up to mid-2013.

If we stick with "just more of the same" as in Figure 2’s extrapolation, we’d expect aggressive early hardware adopters to be running 16-core machines (possibly double that if they’re aggressive enough to run dual-CPU workstations with two sockets), and we’d likely expect most general mainstream users to have 4-, 8- or maybe a smattering of 16-core machines (accounting for the time for new chips to be adopted in the marketplace). [[Note: I often get lazy and say “core” to mean all hardware parallelism. In context above and below, it’s clear we’re talking about “cores and threads.”]]

But what if the gating factor, parallel-ready software, goes away? Then CPU vendors would be free to take advantage of options like the one-time 16-fold hardware parallelism jump illustrated in Figure 3, and we get an envelope like that shown in Figure 5.

Figure 5: Extrapolation of “more of the same big cores” and “possible one-time switch to 4x smaller cores plus 4x threads per core” (not counting some transistors being used for other things like on-chip GPUs).

First, let’s look at the lower baseline: “most general mainstream users to have [4-16 way parallelism] machines in 2013.” So where are we today, in 2012, for mainstream CPU hardware parallelism? Well, Intel Core i7 chips (e.g., Sandy Bridge, Ivy Bridge) are typically in the 4 to 6 core range – which, with hyperthreading (hardware threads), means 8 to 12 hardware threads.

Second, what about the higher potential line for 2013? As noted above:

  • Intel’s Xeon Phi (then Larrabee) is now delivering 50-64 cores x 4 threads = 200 to 256-way parallelism. That’s no surprise, because this article’s upper line was based on exactly the Larrabee data point (see quote below).
  • GPUs already blow the 256 upper bound away – any machine with a two-year-old Tesla has 1,536-way parallelism for programs (including mainstream programs like DVD encoders) that can harness the GPU.

So not only did we already reach the 2013 upper line early, in 2012, but we already exceeded it for applications that can harness the GPU for computation.

As I said in the article:

I don’t believe either the bottom line or the top line is the exact truth, but as long as sufficient parallel-capable software comes along, the truth will probably be somewhere in between, especially if we have processors that offer a mix of large- and small-core chips, or that use some chip real estate to bring GPUs or other devices on-die. That’s more hardware parallelism, and sooner, than most mainstream developers I’ve encountered expect.

Interestingly, though, we already noted two current examples: Sun’s Niagara, and Intel’s Larrabee, already provide double-digit parallelism in mainstream hardware via smaller cores with four or eight hardware threads each. "Manycore" chips, or perhaps more correctly "manythread" chips, are just waiting to enter the mainstream. Intel could have built a nice 100-core part in 2006. The gating factor is the software that can exploit the hardware parallelism; that is, the gating factor is you and me.

Podcast: Interview on Hanselminutes

A few weeks ago at the Build conference, Scott Hanselman and I sat down to talk about C++ and modern UI/UX. The podcast is now live here:

The Hanselminutes Podcast, Show #346
“Why C++” with Herb Sutter

Topics Scott raises include:

  • 2:00 Scott mentions he has used C++ in the past. C++ has changed. We still call it C++, but it’s a very different language now.
  • 5:30 (Why) do we care about performance any more?
  • 10:00 What’s this GPGPU thing? Think of your GPU as your modern 80387.
  • 13:45 C++ is having a resurgence. Where is C++ big?
  • 18:00 Why not just use one language? or, What is C++ good at? Efficient abstraction and portability.
  • 21:45 Programmers have a responsibility to support the business. Avoid the pitfall of speeds & feeds.
  • 24:00 My experience with my iPad, my iPhone, and my Slate 7 with Win8.
  • 28:45 We’re in two election seasons – (a) political and (b) technology (Nexus, iPad Mini, Surface, …). Everyone is wallpapering the media with ads (some of them attack ads), and vying for customer votes/$$, and seeing who’s going to be the winner.
  • 35:00 Natural user interfaces – we get so easily used to touch that we paw at every screen, and Scott’s son gets so used to saying “Xbox pause” that anything that doesn’t respond is “broken.”

Reader Q&A: A good book to learn C++11?

Last night a reader asked one of the questions that helped motivate the creation of isocpp.org:

I am trying to learn the new C++. I am wondering if you are aware of resources or courses that can help me learn a little. I was not able to find any books for C++11. Any help would be greatly appreciated.

By the way, the isocpp website is great :)

Thanks! And you beat me to the punchline. :)

A good place to start is isocpp.org/get-started. It recommends four books, three of which have been updated for C++11 (and two of those three are available now). C++ Primer 5e is a good choice.

We also maintain a list of new articles and books under isocpp.org/blog/category/articles-books.

As for courses, there’s also a growing list of good courses there under isocpp.org/blog/category/training, including two C++11 overview courses by Scott Meyers and Dave Abrahams respectively.

As with all blog categories, you have three ways to find the most recent items under Articles & Books, Training, and other categories:

  • Home page: The most recent items in each category are always listed right on the site’s newspaper-inspired home page.
  • RSS: You can subscribe to RSS feeds for all posts, or for specific categories to get notifications of new articles, books, training, events, and more as they become available.
  • Twitter: Follow @isocpp for notifications of new stuff.

The site and blog are curated, aiming for higher quality and lower volume. Even so, we know you won’t be able to read, watch, or listen to all of the content. So we’re doing our best to make sure that when you do pick something recommended by the site, it’s likely to be of high quality and address what you’re looking to find.

Enjoy! This is version 1 of the site… I hope that it’s already useful, and you’ll see quite a bit more over the coming year.

Friday’s Q&A session now online

My live Q&A after Friday’s The Future of C++ talk is now online on Channel 9. The topics revolved around…

… recent progress and near-future directions for C++, both at Microsoft and across the industry, and talks about some announcements related to C++11 support in VC++ 2012 and the formation of the Standard C++ Foundation.

Herb takes questions from a live virtual audience and demos the new http://isocpp.org site on an 82 inch Perceptive Pixel display attached to a Windows 8 machine.

Thanks to everyone who tuned in.

Our industry is young again, and it’s all about UI

Jeff Atwood’s post two days ago inspired me to write this down. Thanks, Jeff.

“I can’t even remember the last time I was this excited about a computer.”

Jeff Atwood, November 1, 2012


Our industry is young again, full of the bliss and sense of wonder and promise of adventure that comes with youth.

Computing feels young and fresh in a way that it hasn’t felt for years, and that has only happened to this degree at two other times in its history. Many old-timers, including myself, have said “this feels like 1980 again.”

It does indeed. And the reason why is all about user interfaces (UI).

Wave 1: Late 1950s through 60s

First, computing felt young in the late 1950s through the 60s because it was young, and made computers personally available to a select few people. Having computers at all was new, and the ability to make a machine do things opened up a whole new world for a band of pioneers like Dijkstra and Hoare, and Russell (Spacewar!) and Engelbart (Mother of All Demos) who made these computers personal for at least a few people.

The machines were useful. But the excitement came from personally interacting with the machine.

Wave 2: Late 1970s through 80s

Second, computing felt young again in the late 1970s and 80s. Then, truly personal single-user computers were new. They opened up to a far wider audience the sense of wonder that came with having a computer of our very own, and often even with a colorful graphical interface to draw us into its new worlds. I’ll include Woods and Crowther (ADVENT) as an example, because they used a PDP as a personal computer (smile) and their game and many more like it took off on the earliest true PCs – Exidy Sorcerers and TRS-80s, Ataris and Apples. This was the second and much bigger wave of delivering computers we could personally interact with.

The machines were somewhat useful; people kept trying to justify paying $1,000 for one “to organize recipes.” (Really.) But the real reason people wanted them was that they were more intimate – the excitement once again came from personally interacting with the machine.

Non-wave: 1990s through mid-2000s

Although WIMP interfaces proliferated in the 1990s and did deliver benefits and usability, they were never exciting to the degree computers were in the 80s. Why not? Because they weren’t nearly as transformative in making computers more personal, more fun. And then, to add insult to injury, once we shipped WIMPiness throughout the industry, we called it good for a decade and innovation in user interfaces stagnated.

I heard many people wonder whether computing was done, whether this was all there would be. Thanks, Apple, for once again taking the lead in proving them wrong.

Wave 3: Late 2000s through the 10s

Now, starting in the late 2000s and through the 10s, modern mobile computers are new and more personal than ever, and they’re just getting started. But what makes them so much more personal? There are three components of the new age of computing, and they’re all about UI (user interfaces)… count ’em:

  1. Touch.
  2. Speech.
  3. Gestures.

Now don’t get me wrong, these are in addition to keyboards and accurate pointing (mice, trackpads) and writing (pens), not instead of them. I don’t believe for a minute that keyboards and mice and pens are going away, because they’re incredibly useful – I agree with Joey Hess (HT to @codinghorror):

“If it doesn’t have a keyboard, I feel that my thoughts are being forced out through a straw.”

Nevertheless, touch, speech, and gestures are clearly important. Why? Because interacting with touch and speech and gestures is how we’re made, and that’s what lets these interactions power a new wave of making computers more personal. All three are coming to the mainstream in about that order…

Four predictions

… and all three aren’t done, they’re just getting started, and we can now see that at least the first two are inevitable. Consider:

Touchable screens on smartphones and tablets are just the beginning. Once we taste the ability to touch any screen, we immediately want and expect all screens to respond to touch. One year from now, when more people have had a taste of it, no one will question whether notebooks and monitors should respond to touch – though maybe a few will still question touch televisions. Two years from now, we’ll just assume that every screen should be touchable, and soon we’ll forget it was ever any other way. Anyone set on building non-touch mainstream screens of any size is on the wrong side of history.

Speech recognition on phones and in the living room is just the beginning. This week I recorded a podcast with Scott Hanselman, which will air in another week or two, in which Scott shared something he observed firsthand in his son: once a child experiences saying “Xbox Pause,” he will expect all entertainment devices to respond to speech commands, and if they don’t they’re “broken.” Two years from now, speech will probably be the norm as one way to deliver primary commands. (Insert Scotty joke here.)

Likewise, gestures to control entertainment and games in the living room are just the beginning. Over the past year or two, when giving talks I’ve sometimes enjoyed messing with audiences by “changing” a PowerPoint slide by gesturing in the air in front of the screen while really changing the slide with the remote in my pocket. I immediately share the joke, of course, and we all have a laugh together, but audience members more and more often just think it’s a new product and expect it to work. Gestures aren’t just for John Anderton any more.

Bringing touch and speech and gestures to all devices is a thrilling experience. They are just the beginning of the new wave that’s still growing. And this is the most personal wave so far.

This is an exciting and wonderful time to be part of our industry.

Computing is being reborn, again; we are young again.

Talk now online: The Future of C++ (VC++, ISO C++)

Yesterday, many thousands of you were in the room or live online for my talk on The Future of C++. The talk is now available online.

This has been a phenomenal year for C++, since C++11’s publication just 12 months ago. And yesterday was a great day for C++.

Yesterday I had the privilege of announcing much of what Microsoft and the industry have been working on over the past year.

(minor) C++ at Microsoft

On September 12, we shipped VC++ 2012 with the complete C++11 standard library, and added support for C++11 range-for, enum class, override, and final. Less than two months later, yesterday we announced and shipped the November 2012 CTP, a compiler add-in to VC++ 2012 that adds C++11 variadic templates, uniform initialization and initializer_lists, delegating constructors, function template default arguments, explicit conversion operators, and raw string literals. Details here, and download here.

Note that this is just the first batch of additional C++11 features. Expect further announcements and deliveries in the first half of 2013.
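For a feel of what the CTP enables, here is a small illustrative snippet of my own (not taken from the CTP documentation) that touches several of the features above: a delegating constructor, a raw string literal, initializer lists, and a variadic template.

    #include <initializer_list>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    struct Config {
        std::string path;
        Config() : Config(R"(C:\temp\default.cfg)") {}   // delegating constructor + raw string literal
        explicit Config(std::string p) : path(std::move(p)) {}
    };

    template <typename... Args>                          // variadic template
    void log_all(const Args&... args) {
        // Expand the parameter pack, printing each argument in turn.
        (void)std::initializer_list<int>{ (std::cout << args << ' ', 0)... };
        std::cout << '\n';
    }

    int main() {
        std::vector<int> v{1, 2, 3};                     // uniform initialization / initializer_list
        Config c;
        log_all("config:", c.path, "values:", v.size());
    }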

(major) C++ across the industry

Interest and investment in C++ continues to accelerate across the software world.

  • ISO C++ standardization is accelerating. Major companies are dedicating more people and resources to C++ standardization than they have in years. Over the next 24 months, we plan to ship three Technical Specifications and a new C++ International Standard.
  • C++ now has a home on the web at isocpp.org. Launched yesterday, it both aggregates the best C++ content and hosts new content itself, including Bjarne Stroustrup’s new Tour of C++ and Scott Meyers’ new Universal References article.
  • We now have a Standard C++ Foundation. Announced yesterday, it is already funded by the largest companies in the industry down to startups, financial institutions to universities, book publishers to other consortia, with more members joining weekly. For the first time in C++’s history since AT&T relinquished control of the language, we have an entity – a trade organization – that exists exclusively to promote Standard C++ on all compilers and platforms, and companies are funding it because the world runs on C++, and investing in Standard C++ is good business.

This is an exciting time to be part of our industry, on any OS and using any language. It’s especially an exciting time to be involved with C++ on all compilers and platforms.

Thank you all, whatever platform and language you use, for being part of it.

Links: