Plug for the Astoria Seminar

If you enjoy C++, you should also enjoy the Astoria Seminar: Extraordinary C++ to be held on September 23-26, 2007 in Astoria, OR, USA. The event seems to have plenty of seats available, so you haven’t missed out (yet).

Disclaimer: I have no affiliation with the event, and I won’t be speaking or attending. But I feel confident recommending it on the strength of its speakers, if you are interested in any of the topics. Summarizing both the speakers and their talks:

  • Scott Meyers
    • An Overview of TR1
    • C++ Callbacks for C APIs
  • Andrei Alexandrescu
    • Choose your Poison: Exceptions or Error Codes?
    • Memory Allocation: Either Love It or Hate It. (Or Just Think It’s OK.)
  • Dave Abrahams
    • High Performance Generic Software Design (with Eric Niebler)
    • C++ Metaprogramming Concepts and Frameworks
  • Eric Niebler
    • High Performance Generic Software Design (with Dave Abrahams)
    • Domain-Specific Embedded Language Design with Boost.Proto
  • Walter Bright
    • Building Fast Lexers and Parsers
    • Performance Tuning Your Application

If any of the above topics gave a little tug at your heartstrings, or you want to rub shoulders with (and pick the brains of) some really knowledgeable experts, I’m sure you won’t want to miss it. With Dave and Eric there (and Andrei, even though he claims to be speaking about other topics), it’s guaranteed to have a heavy dose of template- and metaprogramming-oriented material.

P.S.: Walter, thanks again for Empire! It was the cause of some happily wasted hours in my undergrad years.

More Visual C++ Q&A: A New Compiler Front-End

A few days ago, I blogged an answer to someone’s question about "where is Visual C++ going" by giving a number of resources and links, primarily about this year’s release, now in public beta.

But their real question was about "where is Visual C++ going beyond VC++ 2008," and in particular whether we will continue to invest in the C++ compilers and native libraries, support new native and managed facilities like XAML via C++, and so on.

Now I can point to an answer that’s slightly more complete but also breaks some news: Our VP, Soma, blogged about this very topic this week. Here’s the key paragraph, some emphasis added:

This team will be significantly increasing support for native development tools. Central to this work is investigating ways to make C++ developers far more efficient in understanding, updating and validating the quality of large native code bases. In fact, the team is already working on a front-end rewrite that will allow for a vastly improved IntelliSense experience. This same work should pave the way for future enhancements such as refactoring and advanced source code analysis. In addition, the team intends to update the native libraries to simplify the development of rich-client user interfaces and access to underlying Windows platform innovation. The team will also work to provide “friction-free” interop between native & managed code through enhancements to C++/CLI and IJW.

From the comments section, Bill Dunlap from our VC++ group added some teasers, including (emphasis mine):

MFC – we are working on a huge update to MFC that should knock your socks off.  I can’t tell you too much right now, but this is closer than you might [think] <g>.

The Visual C++ "front-end rewrite" Soma mentions is indeed well under way, and one of the big benefits will be that it will make the compiler much more flexible (dare I say "agile"?) for us to work with and add features, including C++0x features. We’ll be able to say more about the new compiler and report progress in the coming year or two as it turns into real product you can see and touch.

[8:55am: updated to add Bill Dunlap’s comments and edit the last paragraph]

A Not-So-Innocent Diversion: Trial By Media

Breathless celebrity "news" reporting isn’t enough… apparently we also need a direct invitation to trial by media. From McNews:

[Image: a McNews "Quick Vote" reader poll inviting readers to vote on whether Spears’ children should be taken away]

This is a perilous question in deceptively casual clothing. There should be nothing "Quick" about a question involving children’s lives. (Spears’ name is immaterial — one could as well replace "Britney Spears" with "Anyone’s Name," including yours or mine.)

It’s refreshing to see that McNews agrees we all have the right to enjoy a presumption of innocence. And that they would never encourage Joe Public to casually pass severe judgment on someone they don’t personally know, in absentia, based entirely on hearsay (especially hearsay reported by media with a selection bias favoring sensational value), with a passing click of the mouse.

Removing children can be necessary, but it is a serious matter and so should be considered seriously through careful firsthand examination [*], not treated lightly as worthy of drive-by opinion-poll jurying. Perhaps I’m overly sensitive; I know of kids wrongly taken from their parents under outrageous pretexts, and the results can be tragic.

I understand the intent was just to throw out a quick question, and not to take it too seriously. That’s the problem. But then again, maybe making something profoundly important appear inconsequential is just playing the B side of celebrity "news" that tries to make the inconsequential appear profound.

Footnotes

[*] Even with careful deliberation and due process, staggering mistakes can be and are made in serious matters. I’m currently reading John Grisham’s first nonfiction (and somewhat polemic) book The Innocent Man, the tragic account of Ron Williamson, who was wrongly sentenced to death [PBS] [**] for a murder he did not commit. According to Grisham’s account, Williamson was publicly hounded in a similar trial by media in the local newspaper that influenced community (and juror pool) opinion for years before he was ever charged with anything in that crime. Disclaimer: Of course, Grisham’s book too is a media voice, and his account of the case is itself charged with one-sided reporting. Nobody seems to dispute that the conviction was unjust, however, though people do disagree on where to lay blame.

[**] No, I’m not expressing an opinion on the death penalty.

Visual C++ Q&A

In a comment on another blog entry, "Dev" asked:

When are the Visual C++ team going to make some big announcements on the new C++ features coming after Visual C++ 2008. I know you are very active in the new C++0x world, but people are really worried about what MS is doing or not doing with VC++ on windows, and the VC team seems to be ever shrinking to the point people are questioning if C++ on windows is even going to be a lead product for all the things we would like to use it for, but MS has been unwilling/able to tool us up to do, from XAML to web services. We could really use you speaking up now for our needs, and MS’s product plan to help with our unrest. Can you announce something soon?

This question actually has two parts. Breaking it down:

Q1: Is VC++ doing major work to keep VC++ a lead product and support XAML and other cool new stuff?

Yes, there are significant announced and unannounced things in the pipeline. Here’s a link to one good Channel 9 video by two of our senior VC++-ers, and its associated blurb:

Steve Teixeira and Bill Dunlap: Visual C++ Today and Tomorrow

How will VC++ evolve? How has the advent of managed code affected the evolutionary trajectory of VC++? What’s the VC++ team up to these days, anyway? How much time are they spending innovating C++, the native language? Tune in and learn first hand from two people who know the answers to the above questions (and much more); Steve Teixeira, Group Program Manager, and Bill Dunlap, Program Manager. If you want to know where Visual C++ is heading, then you definitely want to watch this interview. If you are a C++ developer, the message should be very loud and clear: Microsoft has not forgotten about you!

Q2: How about some news and announcements?

Dev asked about post-VC++ 2008 news, but let me point out VC++ 2008-related news too.

A great place to subscribe for news like this is the Visual C++ team blog. Here are some recent highlights that relate to the above:

  • July 27, 2007: Visual Studio 2008 (Orcas) Beta 2 Now Available. This includes a public beta of our next release, Visual C++ 2008.
  • May 31, 2007: MSDN article “An Inside Look At The Next Generation Of Visual C++”. Covers some Visual C++ 2008 features, including enhancements to MFC, targeting Vista, and managed/native interop.
  • April 10, 2007: Visual C++ Orcas Feature Specifications online. Enjoy! Many of you will no doubt like /MP (parallel builds).

(I probably shouldn’t say this, but I just can’t help myself: /MP is actually already there in VC++ 2005, and I use it all the time myself — on a 2-core machine the compile phase goes nearly 2x faster, but there’s not much difference in the link phase. We just didn’t document it because we didn’t have time to fit-and-finish it, and if you try it you will find rough edges. For example, you may see oddly interleaved build messages in the output window, and if you Ctrl-Break a build you may need to do a full rebuild next time. You Have Been Warned. But have fun experimenting, and just don’t tell anyone I told you…)
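
In case you want to try it, here’s roughly what it looks like from the command line. The file names below are hypothetical, and in the IDE you can paste /MP into the project’s C/C++ additional command-line options:

    rem Hypothetical example: compile three translation units in parallel.
    rem With /MP, cl.exe spawns one compiler process per source file,
    rem up to the number of available cores.
    cl /c /MP /EHsc /O2 main.cpp lexer.cpp parser.cpp

    rem The link step is unaffected and runs serially as usual.
    link /OUT:app.exe main.obj lexer.obj parser.obj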

Quoting from the last article, here are some more resources:

There are also some Channel 9 videos on some of these new features (and other aspects of the work we are currently doing):

What about post-VC++ 2008 news? There will be other announcements in the coming months, because we have major stuff in the pipeline that may eclipse (no pun intended) even the above cool features. As those start to make their way out into the public eye, I hope you’ll find them as refreshing as I do.

Effective Concurrency: How Much Scalability Do You Have or Need?

The second Effective Concurrency column in DDJ just went live. It’s titled "How Much Scalability Do You Have or Need?" and makes the case that there’s more than just one important category of throughput scalability, and one size does not fit all. From the article:

In your application, how many independent pieces of work are ready to run at any given time? Put another way, how many cores (or hardware threads, or nodes) can your code harness to get its answers faster? And when should the answers to these questions not be "as many as possible"?

I hope you enjoy it.

Next month’s article is already in post-production. It will be titled "Use Critical Regions (Preferably Locks) to Eliminate Races" and will hit the web about a month from now. One of the early questions it answers is, How bad can a race be? There’s a hint in the article’s tagline: "In a race no one can hear you scream…"
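
To give you a taste of how bad, here’s a minimal sketch of the classic lost-update race and the critical region that eliminates it. (This example is mine, not the article’s, and it uses C++0x-style std::thread and std::mutex for brevity.)

    #include <iostream>
    #include <mutex>
    #include <thread>

    int counter = 0;          // shared state
    std::mutex counter_mutex; // protects counter

    void add_million() {
        for (int i = 0; i < 1000000; ++i) {
            // Without this lock, ++counter is a read-modify-write race:
            // two threads can read the same old value and each write back
            // old+1, silently losing increments.
            std::lock_guard<std::mutex> hold(counter_mutex); // critical region
            ++counter;
        }
    }

    int main() {
        std::thread t1(add_million), t2(add_million);
        t1.join();
        t2.join();
        // With the lock this reliably prints 2000000; remove the lock and
        // the total will usually come up short.
        std::cout << counter << "\n";
    }
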
Finally, here are some links to last month’s Effective Concurrency column and to a prior locking article of interest that provides a nice background motivation for the next few EC articles to come starting next month:

The Pit and the Pendulum

Don’t fall into the pit of thinking there’s no pendulum, or that the pendulum can be nailed to one side.

Earlier today, Michael Swaine wrote an article commenting on the "trend" of Google Gears, Adobe AIR, and Microsoft Silverlight. Here’s the opening blurb and intro paragraph:

Return of the Desktop

Is the rediscovery of the desktop just the latest swing of some tech-trend pendulum, or is there something more going on here?

This year, some of the big boys gave every impression of having suddenly and simultaneously remembered that there is such a thing as a desktop. Google got Geared up, Adobe announced AIR, and Microsoft saw the light with Silverlight, all of which are tools to help web developers integrate operations on the Web and the desktop just a little better. That oft-repeated mantra that the web browser is the new operating system? In 2007, not so much.

Of course it’s a pendulum. More specifically, it’s the industry constantly rebalancing the mix of several key technology factors, notably:

  • computation capacity available on the edge (from motes and phones through to laptops and desktops) and in the center (from small and large servers through to local and global datacenters)
  • communication bandwidth, latency, cost, availability, and reliability

This balancing actually isn’t news; we’ve been doing it since the dawn of computing. Conceptually, it’s not much different from how the designers of your PC balanced the kind and speed of memory to match the speed of the processor and the bus and the hard drive etc. to create a balanced system. We do and redo this exercise all the time. Here are just a few of the pendulum swings we’ve seen historically:

Era/Epoch     | The Center                   | The Edge
--------------|------------------------------|--------------------------------------
Precambrian   | ENIAC                        |
Cambrian      | Walk-up mainframes           |
Devonian      |                              | Terminals and time-sharing
Permian       | Minicomputers                |
Triassic      |                              | Microcomputers, personal computers
Jurassic      | File and print servers       |
Cretaceous    | Client/Server, server tier   | Client/Server, middle tier
Paleocene     |                              | PDA
Eocene        | Web servers                  |
Oligocene     |                              | ActiveX, JavaScript; PDA phone
Miocene       | E-tailers                    |
Pliocene      |                              | Flash, AJAX
Pleistocene   | Web services; Data centers   |
Holocene      |                              | Google Gears; Adobe AIR; Silverlight

How many pendulum swings can you count on just that list? In my own career, I’ve missed only the Precambrian and Cambrian (I’m a child of terminals and micros, and never had to carry stacks of punched cards uphill both ways in snow up to my waist). Many of you have experienced most of these swings.

It’s also not news that neither the center nor the edge is going to go away. We’re in an expanding computing universe: The question is not whether one will replace the other, but what balance they will be in at a given point. This will continue to be true for the foreseeable future no matter how often people on either end of the pendulum swing try to nail the pendulum where they want it for their own business reasons. (Take it from someone who lived through trying to market early peer-to-peer database and application models in the midst of Larry Ellison’s screaming-loud "network computer" hype, and had to deal with VC after VC who believed desktops and notebooks were going to evaporate. Sigh.)

[Slide: "The Computing Pendulum," from Craig Mundie’s talk]

What is news, of course, is how those factors are changing and therefore how their balance is changing. Craig Mundie has spoken about this pendulum in several talks this year, including last week’s Financial Analysts Meeting (transcript and WMP webcast link; slides, including the one reproduced above).

Quoting from one of those talks:

One of the things that I also find fascinating at this point in time is how people, how easily we forget about the cyclical nature of the evolution of the computing paradigm.

And from another:

Right now, as the Internet has evolved, broadband has become more highly penetrated, and to some extent the computers seems to be not fully utilized, we’re in the middle of one of these natural pendulum like swings between centralized computing and computing at the edge. It started with the mainframe, and then we added terminals, and then we moved to departmental, and then we moved to personal; it just kind of moves back and forth. And there are a lot of people today who say, oh, you know, I think that in the future we’ll just have dumb presentation devices again, and we’ll do all the computing in the cloud.

But … I contend that since the cloud is made ultimately from the same microprocessors, as the utilization becomes higher, it becomes impractical for a whole variety of costs and latency reasons to think you can just push everything up the wire into some centralized computing utility.

And so, in fact, I think for the first time in a long time we’re going to see the pendulum come into a fairly balanced position where we, in fact, do have incredible power plants of the Internet in these huge datacenters that provide these integrating services across the network, but at the same time we’re going to see increasingly powerful local personal computing facilities in everything from embedded devices, cell phones, and on up the computing spectrum.

A nicely balanced view. The center (mainframes, datacenters) isn’t going away anytime soon. But neither is the edge (PDAs, laptops). It would obviously be foolish to imagine either away, at least yet, because they each have different capability, availability, performance, and reliability characteristics, so there’s plenty of reason to choose each one for a different part of an application or system.

Don’t fall into the pit of assuming the pendulum will get nailed to one side. That’s pretty unlikely. Bet instead on new technologies constantly being developed to bring the center and the edge into new balance, filling the holes where each is deficient as the two grow at different rates. Yesterday’s disconnected computers just couldn’t do everything you can do on the Internet, so as internetworks became mainstream, something like HTML and AJAX had to come along to let us exploit them. Early and current web apps just can’t do everything you can do on a rich client, hence first AJAX, then Gears, AIR, and Silverlight, with more still to come tomorrow and next year and next decade.

Fasten your seat belts.