Kindling

Two weeks ago, I broke down and bought a Kindle. I like it:

  • It’s a good and well-designed reader, and the experience is much better than the other e-book reading I’ve done before on phones and PDAs. I like how when you bookmark a page, you can see it… the corner of the page gets a little dog-ear. [1]
  • It’s got a nice e-paper screen that uses ambient light, not backlight, which makes it readable anywhere just like a printed page — it’s even better, not worse, in direct sunlight.
  • It’s light and thin and sturdy. Sure beats carrying three or four books on a trip.
  • It has great battery life. I’ve only charged it once so far, when I first received it… since then I’ve had it for 11 days and read a full book and a half, and it still has 75% of its first charge left. (It helps that I turn the wireless off unless I’m actively using it.)
  • Fast, free wireless everywhere in the U.S., without computers or WiFi.

But today, it transformed my reading experience.

This morning, I was unsuspectingly reading my feeds in Google Reader as usual, blissfully unaware that the way I read books was about to change. Among other articles, I noticed that Slashdot ran a book review of Inside Steve’s Brain (that’s Jobs, not Wozniak or Ballmer). The review made me want to read the book. That’s when the new reality started, because I was interested in the book now, and had time to start reading it now:

  • Normally, I would have ordered it from Amazon and waited for it to arrive. But what usually happens then is that the book arrives a week later, and when it gets here I don’t have time to start it right away or I don’t quite feel like that kind of book just at the moment… and it goes on a shelf, with a 70% probability of being picked up at some point in the future.
  • Today, I searched for the book title on my Kindle, clicked “Buy”, and a few minutes later started reading Inside Steve’s Brain while eating lunch. [2]

That convenience isn’t merely instant gratification, it’s transformative. I suspect I’m going to be reading even more books now, even though I have a few little nits with the device, such as that the next and previous page buttons are a little too easy to press in some positions.

In other news, the Kindle also supports reading blogs and newspapers and some web surfing, but those are less compelling for me because I tend to do those things in the context of work, which means I’m already sitting at a computer with a bigger color screen and full keyboard. Maybe someday I’ll do it on e-paper. Until then, just living inside a virtual bookstore is plenty for me. Kindle + the Amazon Kindle store = iPod + iTunes for books. [3]

Here’s a useful summary article on Kindle features from a prospective user’s point of view.

Notes

1. The first two books I downloaded? The Design of Everyday Things, which was interestingly apropos to read on a new device like the Kindle with its nice dog-ear feedback and other well-designed features, and Foundation, which I hadn’t read in ages.

2. And it cost less than half what the dead-tree version would (though the latter was hardcover).

3. Caveat: I’m not actually an iPod owner, and I hate how Apple keeps insisting on installing iTunes on my computer just because I have Safari or QuickTime installed (mutter grumble evil product-tying monopolies mutter grumble :-) ). But apparently everyone else loves them, and they have indeed changed their industry.

Hungarian Notation Is Clearly (Good|Bad)

A commenter asked:

thread_local X tlsX; ??

Herb, I hope you aren’t backtracking on Hungarian Notation now that you work for Microsoft. Say it aint so…

It ain’t so. Besides, Microsoft’s Framework Developer’s Guide prominently intones: “Do not use Hungarian notation.”

Warts like “tls” and “i” are about lifetime and usage, not type. Here “tls” denotes that each thread gets its own copy of the value that is constructed and destroyed once per thread that uses it (lifetime) and doesn’t need to be synchronized using mutexes (usage). As another example of usage warts, I’ll also use “i” for a variable that’s used as an index or for iteration — and given that the “i” convention goes back to before BASIC, we shouldn’t try to pin the Hungarian tail on that donkey.

Having said that, though, many people have variously decried and defended different forms of Hungarian, and you may notice a pattern in which form gets which treatment.

Today, “Hungarian” nearly always means Systems Hungarian. The main trouble with Systems Hungarian comes from trying to embed information about a variable’s type into the variable’s name by prepending an encoded wart like the venerable sz, pach, ul, and their ilk. Although potentially helpful in a weakly-typed language like C, that’s known to be brittle and the prefixes tend to turn into lies as variable types morph during maintenance. The warting systems also don’t extend well to user-defined types and templates.

I’ve railed against the limitations and perils of Hungarian with Andrei Alexandrescu in our book C++ Coding Standards, where I made sure it was stated right up front as part of “Item 0: Don’t sweat the small stuff (or, Know what not to standardize)”:

Example 3: Hungarian notation. Notations that incorporate type information in variable names have mixed utility in type-unsafe languages (notably C), are possible but have no benefits (only drawbacks) in object-oriented languages, and are impossible in generic programming. Therefore, no C++ coding standard should require Hungarian notation, though a C++ coding standard might legitimately choose to ban it.

and with Jim Hyslop in our article “Hungarian wartHogs”:

“… the compiler already knows much more than you do about an object’s type. Changing the variable’s name to embed type information adds little value and is in fact brittle. And if there ever was reason to use some Hungarian notation in C-style languages, which is debatable, there certainly remains no value when using type-safe languages.”

There’s an amusing real-world note later in that article. I’ll pick up where the Guru is saying:

“I recall only one time when Hungarian notation was useful on a project. … One of the programmers on the project was named Paul,” the Guru explained. “Several months into the project, while still struggling to grow a ponytail and build his report-writing module, he pointed out that Hungarian notation had helped him find a sense of identity, for he now knew what he was…” She paused.

I blinked. It took me about ten seconds, and then I shut my eyes and grimaced painfully. “Pointer to array of unsigned long,” I groaned.

She smiled, enjoying my pain. “True story,” she said.

It is indeed a true story. I worked with him in 1993. The joke was bad then, too, but at least using Hungarian was more defensible because the project’s code was written in C. I found it handy at the time, but now that I work in more strongly typed languages I find the type-embedding version of Hungarian more actively harmful than actually useful.

You might call me a Hungarian emigrant, now living happily as an expat out in eastern C++ near the Isle of Managed Languages.

Trip Report: June 2008 ISO C++ Standards Meeting

The ISO C++ committee met in Sophia Antipolis, France on June 6-14. You can find the minutes here (note that these cover only the whole-group sessions, not the breakout technical sessions where we spend most of the week).

Here’s a summary of what we did, with links to the relevant papers to read for more details, and information about upcoming meetings.

Highlights: Complete C++0x draft coming in September

The biggest goal entering this meeting was to make C++0x feature-complete and stay on track to publish a complete public draft of C++0x this September for international review and comment — in ISO-speak, an official Committee Draft or CD. We are on track to achieve that, so the world will know the shape of C++0x in good detail this fall.

We’re also now planning to have two rounds of international comment review instead of just one, to give the world a good look at the standard and two opportunities for national bodies to give their comments, the first round starting after our September 2008 meeting and the second round probably a year later. However, the September 2008 CD is “it”, feature-complete C++0x. The only changes expected to be made between that CD and the final International Standard are bug fixes and clarifications. It’s helpful to think of a CD as a feature-complete beta.

Coming into the June meeting, we already had a nearly-complete C++0x internal working draft — most features that will be part of C++0x had already been “checked in.” Only a few were still waiting to become stable enough to vote in, including initializer lists, range-based for loops, and concepts.

Of these, concepts is the long-pole feature for C++0x, which isn’t surprising given that it’s the biggest new language feature we’re adding to Standard C++. A primary goal of the June meeting, therefore, was to make as much progress on concepts as possible, and to see whether it would be possible to vote that feature into the C++0x working draft at this meeting. We almost did that, thanks to a lot of work not only in France but also at smaller meetings throughout the winter and spring: for the first time, we ended a meeting with no known issues or controversial points in the concepts standardese wording, and we expect to “check in” concepts to the working draft at the next meeting in September. That both cleared the way for us to publish a complete draft then and motivated the plan to do two rounds of public review rather than one, to make sure the standard gets enough “bake time” in its complete form.

Next, I’ll summarize some of the major features voted into the draft at the June meeting.

Initializer lists

C++0x initializer lists accomplish two main objectives:

  • Uniformity: They provide a uniform initialization syntax you can use consistently everywhere, which is especially helpful when you write templates.
  • Convenience: They provide a general-purpose way of using the C initializer syntax for all sorts of types, notably containers.

See N2215 for more on the motivation and original design, and N2672 and N2679 for the final standardese.

Example: Initializing aggregates vs. classes

It’s convenient that we can initialize aggregates like this:

struct Coordinate1 {
  int i;
  int j;
  //…
};

Coordinate1 c1 = { 1, 2 };

but the syntax is slightly different for classes with constructors:

class Coordinate2 {
public:
  Coordinate2( int i, int j );
  // …
};

Coordinate2 c2( 1, 2 );

In C++0x, you can still do all of the above, but initializer lists give us a regular way to initialize all kinds of types:

Coordinate1 c1 = { 1, 2 };
Coordinate2 c2 = { 1, 2 }; // legal in C++0x

Having a uniform initialization syntax is particularly helpful when writing template code, so that the template can easily work with a wide variety of types.

Example: Initializing arrays vs. containers

One place where the lack of uniform initialization has been particularly annoying — at least to me, when I write test harnesses to exercise the code I show in articles and talks — is initializing a container with some default values. Don’t you hate it when you want to create a container initialized to some known values, and if it were an array you could just write:

string a[] = { "xyzzy", "plugh", "abracadabra" };

but if it’s a container like a vector, you have to default-construct the container and then push every entry onto it individually:

// Initialize by hand today
vector<string> v;
v.push_back( "xyzzy" );
v.push_back( "plugh" );
v.push_back( "abracadabra" );

or, even more embarrassingly, initialize an array first for convenience and then construct the vector as a copy of the array, using the vector constructor that takes a range as an iterator pair:

// Initialize via an array today
string a[] = { "xyzzy", "plugh", "abracadabra" }; // put it into an array first
vector<string> v( a, a+3 ); // then construct the vector as a copy

Arrays are weaker than containers in nearly every other way, so it’s annoying that they get this unique convenience just because they have been built into the language since the early days of C.

The lack of convenient initialization has been even more irritating with maps:

// Initialize by hand today
map<string,string> phonebook;
phonebook[ "Bjarne Stroustrup (cell)" ] = "+1 (212) 555-1212";
phonebook[ "Tom Petty (home)" ] = "+1 (858) 555-9734";
phonebook[ "Amy Winehouse (agent)" ] = "+44 20 74851424";

In C++0x, we can initialize any container with known values as conveniently as arrays:

// Can use initializer list in C++0x
vector<string> v = { "xyzzy", "plugh", "abracadabra" };
map<string,string> phonebook =
  { { "Bjarne Stroustrup (cell)", "+1 (212) 555-1212" },
    { "Tom Petty (home)", "+1 (858) 555-9734" },
    { "Amy Winehouse (agent)", "+44 20 74851424" } };

As a bonus, a uniform initialization syntax even makes arrays easier to deal with. For example, the array initializer syntax doesn’t support arrays that are dynamically allocated or arrays that are class members:

// Initialize dynamically allocated array by hand today
int* a = new int[3];
a[0] = 1;
a[1] = 2;
a[2] = 99;

// Initialize member array by hand today
class X {
  int a[3];
public:
  X() { a[0] = 1; a[1] = 2; a[2] = 99; }
};

In C++0x, the array initialization syntax is uniformly available in these cases too:

// C++0x

int* a = new int[3] { 1, 2, 99 };

class X {
  int a[3];
public:
  X() : a{ 1, 2, 99 } {}
};

More concurrency support

Last October, we already voted in a state-of-the-art memory model, atomic operations, and a threading package. That covered the major things we wanted to see in this standard, but a few more were still in progress. Here are the major changes we made this time that relate to concurrency.

Thread-local storage (N2659): To declare a variable which will be instantiated once for each thread, use the thread_local storage class. For example:

class MyClass {
  // …
private:
  static thread_local X tlsX;  // thread-local class members must be static
};
thread_local X MyClass::tlsX;  // definition, as for any static data member

void SomeFunc() {
  thread_local Y tlsY;
  // …
}

Dynamic initialization and destruction with concurrency (N2660) handles two major cases:

  • Static and global variables can be concurrently initialized and destroyed if you try to access them on multiple threads before main() begins. If more than one thread could initialize (or use) the variable concurrently, however, it’s up to you to synchronize access.
  • Function-local static variables will have their initialization automatically protected; while one thread is initializing the variable, any other threads that enter the function and reach the variable’s declaration will wait for the initialization to complete before they continue on. Therefore you don’t need to guard initialization races or initialization-use races, and if the variable is immutable once constructed then you don’t need to do any synchronization at all to use it safely. If the variable can be written to after construction, you do still need to make sure you synchronize those post-initialization use-use races on the variable.

Thread safety guarantees for the standard library (N2669): The upshot is that the answer to questions like “what do I need to do to use a vector<T> v or a shared_ptr<U> sp thread-safely?” is now explicitly “same as any other object: if you know that an object like v or sp is shared, you must synchronize access to it.” Only a very few objects need to guarantee internal synchronization; the global allocator is one of those, though, and so it gets that stronger guarantee.

The “other features” section below also includes a few smaller concurrency-related items.

More STL algorithms (N2666)

We now have the following new STL algorithms. Some of them fill holes (e.g., “why isn’t there a copy_if or a counted version of algorithm X?”) and others provide handy extensions (e.g., “why isn’t there an all_of or any_of?”).

  • all_of( first, last, pred )
    returns true iff all elements in the range satisfy pred
  • any_of( first, last, pred )
    returns true iff any element in the range satisfies pred
  • copy_if( first, last, result, pred )
    the “why isn’t this in the standard?” poster child algorithm
  • copy_n( first, n, result )
    copy for a known number of elements, n
  • find_if_not( first, last, pred )
    returns an iterator to the first element that does not satisfy pred
  • iota( first, last, value )
    for each element in the range, assigns value and increments value “as if by ++value”
  • is_partitioned( first, last, pred )
    returns true iff the range is already partitioned by pred; that is, all elements that satisfy pred appear before those that don’t
  • none_of( first, last, pred )
    returns true iff none of the elements in the range satisfies pred
  • partition_copy( first, last, out_true, out_false, pred )
    copy the elements that satisfy pred to out_true, the others to out_false
  • partition_point( first, last, pred)
    assuming the range is already partitioned by pred (see is_partitioned above), returns an iterator to the first element that doesn’t satisfy pred
  • uninitialized_copy_n( first, n, result )
    uninitialized_copy for a known number of elements, n

Other approved features

  • N2435 Explicit bool for smart pointers
  • N2514 Implicit conversion operators for atomics
  • N2657 Local and unnamed types as template arguments
  • N2658 Constness of lambda functions
  • N2667 Reserved namespaces for POSIX
  • N2670 Minimal support for garbage collection and reachability-based leak detection 
  • N2661 Time and duration support
  • N2674 shared_ptr atomic access
  • N2678 Error handling specification for Chapter 30 (Threads)
  • N2680 Placement insert for standard containers

Next Meeting

Although we just got back from France, the next meeting of the ISO C++ standards committee is already coming up fast.

The meetings are public, and if you’re in the area please feel free to drop by.

Seneca and Shakespeare on Goals and Opportunities

From the ancient dramatist Seneca the Younger:

“Our plans miscarry because they have no aim. When a man does not know what harbor he is making for, no wind is the right wind.”

And from the Bard, not to be outdone in metaphors of ships and seas:

“There is a tide in the affairs of men,
Which, taken at the flood, leads on to fortune;
Omitted, all the voyage of their life
Is bound in shallows and miseries.
We must take the current when it serves,
Or lose our ventures.”

Talking Lambdas with Bill Gates on BBC

[6/25: Added YouTube availability and notes.]

A few weeks ago, the BBC was in town to tape a special interview/documentary on Bill Gates. As part of the footage they got, there’s a Bill-in-a-technical-review-meeting shot that includes yours truly at a whiteboard presenting an overview-plus-drilldown on C++0x lambda functions. It was a good review; Bill’s a sharp guy with a broad and deep background and incisive questions.

Here’s the trailer and the program [YouTube]. Some bits of our VC++ team review start around 4:30 of part one and 8:05 of part two, with a few other shots later on. If you watch really closely, you can see a brief flash of a guy in a blue shirt at a whiteboard with code on it in HerbWhiteboardScrawl Sans 60pt, and then Bill gesturing and making comments about simulating lambdas using macros instead of baking them into the language, and variable capture consequences.

The BBC documentary “How a Geek Changed the World” first aired today in the UK on BBC2; I’m guessing it will repeat. I don’t know when it will air in the U.S., but as of this writing it’s on YouTube.

Type Inference vs. Static/Dynamic Typing

Jeff Atwood just wrote a nice piece on why type inference is convenient, using a C# sample:

I was absolutely thrilled to be able to refactor this code:

StringBuilder sb = new StringBuilder(256);
UTF8Encoding e = new UTF8Encoding();
MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider();

Into this:

var sb = new StringBuilder(256);
var e = new UTF8Encoding();
var md5 = new MD5CryptoServiceProvider();

It’s not dynamic typing, per se; C# is still very much a statically typed language. It’s more of a compiler trick, a baby step toward a world of Static Typing Where Possible, and Dynamic Typing When Needed.

It’s worth making a stronger demarcation among:

  • type inference, which you can do in any language
  • static vs. dynamic typing, which is completely orthogonal but all too often confused with inference
  • strong vs. weak typing, which is mostly orthogonal (e.g., C is statically typed because every variable has a statically known actual type, but also weakly typed because of its casts)

Above, Jeff explicitly separates inference and dynamic-ness. Unfortunately, later on he implies that inference is a small step toward dynamic typing, which is true only in a stylistic sense and might mislead some readers into thinking inference has something to do with dynamic-ness, which it doesn’t.

Type Inference

Many languages, including C# (as shown above) and the next C++ standard (C++0x, shown below), provide type inference. C++0x does it via the repurposed auto keyword. For example, say you have an object m of type map<int,list<string>>, and you want to create an iterator to it:

map<int,list<string>>::iterator i = m.begin(); // type is required in today's C++, allowed in C++0x
auto i = m.begin(); // type can be inferred in C++0x

How many times have you said to your compiler, “Compiler, you know the type already, why are you making me repeat it?!” Even the IDE can tell you what the type is when you hover over an expression.

Well, in C++0x you won’t have to any more, which is often niftily convenient. This gets increasingly important as we don’t want to, or can’t, write out the type ourselves, because we have:

  • types with more complicated names
  • types without names (or hard-to-find names)
  • types held most conveniently via an indirection

In particular, consider that C++0x lambda functions generate a function object whose type you generally can’t spell, so if you want to hold that function object and don’t have auto then you generally have to use an indirection:

function<void(void)> f = [] { DoSomething(); }; // hold via a wrapper — requires indirection
auto f = [] { DoSomething(); }; // infer the type and bind directly

Note that the last line above is more efficient than the C equivalent using a pointer to function, because C++ lets you inline everything. For more on this, see Item 46 in Scott Meyers’ Effective STL on why it’s preferable to use function objects rather than functions, because (counterintuitively) they’re more efficient.

Now, though there’s no question auto and var are great, there are some minor limitations. In particular, you may not want the exact type, but rather another type that it can be converted to:

map<int,list<string>>::const_iterator ci = m.begin(); // ci's type is map<int,list<string>>::const_iterator
auto i = m.begin(); // i's type is map<int,list<string>>::iterator

Widget* w = new Widget(); // w's type is Widget*
const Widget* cw = new Widget(); // cw's type is const Widget*
WidgetBase* wb = new Widget(); // wb's type is WidgetBase*
shared_ptr<Widget> spw( new Widget() ); // spw's type is shared_ptr<Widget>
auto w = new Widget(); // w's type is Widget*

So C++0x auto (like C# var) only gets you the most obvious type. Still and all, that does cover a lot of the cases.

The important thing to note in all of the above examples is that, regardless how you spell it, every variable has a clear, unambiguous, well-known and predictable static type. C++0x auto and C# var are purely notational conveniences that save us from having to spell it out in many cases, but the variable still has one fixed and static type.

Static and Dynamic Typing

As Jeff correctly noted in the above-quoted part, this isn’t dynamic typing, which permits the same variable to actually have different types at different points in its lifetime. Unfortunately, he goes on to say the following that could be mistaken by some readers to imply otherwise:

You might even say implicit variable typing is a gateway drug to more dynamically typed languages.

I know Jeff knows what he’s talking about because he said it correctly earlier in the same post, but let’s be clear: inference doesn’t have anything to do with dynamic typing. Jeff is noting that inference happens to let you declare variables in a style that can be similar to the way you do it all the time in a dynamically typed language. (Before I could post this, I saw that Lambda the Ultimate also commented on this confusion. At least one commenter noted that this could equally be viewed as a gateway drug to statically typed languages, because you get the notational convenience without abandoning static typing.)

Quoting from Bjarne’s glossary:

dynamic type – the type of an object as determined at run-time; e.g. using dynamic_cast or typeid. Also known as most-derived type.

static type – the type of an object as known to the compiler based on its declaration. See also: dynamic type.

Let’s revisit an earlier C++ example again, which shows the difference between a variable’s static type and dynamic type:

WidgetBase* wb = new Widget(); // wb's static type is WidgetBase*
if( dynamic_cast<Widget*>( wb ) ) { … } // cast succeeds: wb's dynamic type is Widget*

The static type of the variable says what interface it supports, so in this case wb allows you to access only the members of WidgetBase. The dynamic type of the variable is what the object being pointed to right now is.

In dynamically typed languages, however, variables don’t have a static type and you generally don’t have to mention the type. In many dynamic languages, you don’t even have to declare variables. For example:

# Python
x = 10              # x's type is int
x = "hello, world"  # x's type is str

Boost’s variant and any

There are two popular ways to get this effect in C++, even though the language remains statically typed. The first is Boost variant:

// C++ using Boost
variant< int, string > x; // say what types are allowed
x = 42;                   // now x holds an int
x = "hello, world";       // now x holds a string
x = new Widget();         // error, not int or string

Unlike a union, a variant can include essentially any kind of type, but you have to say what the legal types are up front. You can even simulate overload resolution via boost::apply_visitor, which is checked statically (at compile time).

The second is Boost any:

// C++ using Boost
any x;
x = 42;                       // now x holds an int
x = string( "hello, world" ); // now x holds a string (a bare literal would store a const char*)
x = new Widget();             // now x holds a Widget*

Again unlike a union, an any can include essentially any kind of type. Unlike variant, however, any doesn’t make (or let) you say what the legal types are up front, which can be good or bad depending how relaxed you want your typing to be. Also, any doesn’t have a way to simulate overload resolution, and it always requires heap storage for the contained object.

Interestingly, this shows how C++ is well and firmly (and let’s not forget efficiently) on the path of Static Typing Where Possible, and Dynamic Typing When Needed.

Use variant when:

  • You want an object that holds a value of one of a specific set of types.
  • You want compile-time checked visitation.
  • You want the efficiency of a stack-based storage scheme where possible (avoiding the overhead of dynamic allocation).
  • You can live with horrible error messages when you don’t type it exactly right.

Use any when:

  • You want the flexibility of having an object that can hold a value of virtually “any” type.
  • You want the flexibility of any_cast.
  • You want the no-throw exception safety guarantee for swap.

Stroustrup & Sutter on C++: The Interviews

While Bjarne and I were at SD for S&S, we took time out to do an interview together with Ted Neward for InformIT. I just got word that it went live… here are the links.

On the OnSoftware – Video (RSS):

On the OnSoftware – Audio (RSS):

Enjoy!

Memory Model talk at Gamefest 2008

I’ll be giving a memory model talk at Gamefest in Seattle next month. Here’s a quick summary:

Memory Models: Foundational Knowledge for Concurrent Code
July 22-23, 2008
Gamefest 2008
Seattle, WA, USA

A memory model defines a contract between the programmer and the execution environment that trades off:

  • programmability via stronger guarantees for programmers, vs.
  • performance via greater flexibility for reordering program memory operations.

The “execution environment” includes everything from the compiler and optimizer on down to the CPU and cache hardware, and it really wants to help you by reordering your program to make it run faster. You, on the other hand, want it not to help you excessively in ways that will break the meaning of your code. In this talk, we’ll consider why a memory model is important, how to achieve a reasonable balance, detailed considerations on current and future PC and Xbox platforms, and some best practices for writing solid concurrent code.

Where to find the state of ISO C++ evolution

After each ISO C++ meeting, I post a trip report update to my blog summarizing what’s new as of that meeting with a drill-down into some highlights. But wouldn’t it be handy to have an up-to-date summary scorecard with a snapshot of all proposals’ status to date? Indeed it would, and so today someone asked me in email:

I’m a software developer interested in the forthcoming C++ standard. Are there any resources on the web where I can find a list of already accepted proposals as of the last meeting in Bellevue? I know that I can read the draft, but I would like to have all the new features in list form.

Answer: Yes, thanks to the gracious volunteer efforts of Alisdair Meredith. Alisdair maintains the “State of C++ Evolution” paper, and posts updated versions before and after each ISO committee meeting. You can find the current one here:

For updates, just watch the committee papers pages where new batches of papers get posted every two or three months, including new versions of the evolution status paper and new updated working drafts of the next ISO C++ standard (maintained by our hardworking, and amazingly tireless, project editor Pete Becker).

Thanks, Alisdair and Pete!

Notes

1. Yes, Pete’s car has wheels. That’s not what I meant.

Quad-core a "waste of electricity"?

Jeff Atwood wrote:

In my opinion, quad-core CPUs are still a waste of electricity unless you’re putting them in a server. Four cores on the desktop is great for bragging rights and mathematical superiority (yep, 4 > 2), but those four cores provide almost no benchmarkable improvement in the type of applications most people use. Including software development tools.

Really? You must not be using the right tools. :-) For example, here are three I’m familiar with:

  • Visual C++ 2008’s /MP flag tells the compiler to compile files in the same project in parallel. I typically get linear speedups on the compile phase. The link phase is still sequential, but on most projects compilation dominates.
  • Since Visual Studio 2005 we’ve supported parallel project builds in Batch Build mode, where you can build multiple subprojects in parallel (e.g., compile your release and debug builds in parallel), though that feature didn’t let you compile multiple files in the same project in parallel. (As I’ve blogged about before, Visual C++ 2005 actually already shipped with the /MP feature, but it was undocumented.)
  • Excel 2007 does parallel recalculation. Assuming the spreadsheet is large and doesn’t just contain sequential dependencies between cells, it usually scales linearly up to at least 8 cores (the most I heard was tested before shipping). I’m told that customers who work on big financial spreadsheets love it.
  • And need I mention games? (This is just a snarky comment… Jeff already correctly noted that “rendering, encoding, or scientific applications” are often scalable today.)

And of course, even if you’re having a terrible day and not a single one of your applications can use more than one core, you can still see real improvement on CPU-intensive multi-application workloads on a multicore machine today, such as by being able to run other foreground applications at full speed while encoding a movie in the background.

Granted, as I’ve said before, we do need to see examples of manycore (e.g., >10 cores) exploiting mainstream applications (e.g., something your dad might use). But it’s overreaching to claim that there are no multicore (e.g., <10 cores) exploiting applications at all, not even development tools. We may not yet have achieved the mainstream manycore killer app, but it isn’t like we have nothing to show at all. We have started out on the road that will take us there.