
Archive for the ‘Opinion & Editorial’ Category

Bjarne Stroustrup wrote the following a few minutes ago on the concepts mailing list:

Let me take this opportunity to remind people that

  • "being able to do something is not sufficient reason for doing it" and
  • "being able to do every trick is not a feature but a bug"

For the latter, remember Dijkstra’s famous "Goto considered harmful" paper. The point was not that the "new features" (loop constructs) could do every goto trick better/simpler, but that some of those tricks should be avoided to simplify good programming.

Concepts and concepts lite are meant to make good generic programming simpler. They are not meant to be a drop-in substitute for every metaprogramming and macroprogramming trick. If you are an expert, and if in your expert opinion you and your users really need those tricks, you can still use them, but we need to make many (most) uses of templates easier to get right, so that they can become more mainstream. That’s where concepts and concepts lite fit in.

Some of you may find this hard to believe, but "back then" there was quite serious opposition to function declarations because "they restricted the way functions could be used and the way separate compilation could be used" and also serious opposition to virtual functions "because pointers to functions are so much more flexible." I see concepts lite (and concepts) in the same light as goto/for, unchecked-function-arguments/function-declarations, pointers-to-functions/abstract-classes.

Reminds me of the related antipattern: “Something must be done. This is something. Therefore we must do it!”
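
For anyone who hasn’t yet seen what concepts lite buys you, here is a minimal sketch of a constrained template. I’ve written it in the concepts syntax that eventually shipped in C++20, which grew out of concepts lite (the draft spelling at the time differed in details), and Sortable here is my own illustrative concept, not anything from the standard library:

    #include <algorithm>
    #include <concepts>
    #include <vector>

    // Illustrative concept (not a standard one): a container we can sort,
    // i.e., something with begin()/end() and totally ordered elements.
    template<typename T>
    concept Sortable = requires(T& c) {
        c.begin();
        c.end();
        requires std::totally_ordered<typename T::value_type>;
    };

    // The constraint is part of the interface, so misuse is diagnosed at the
    // call site in the caller's terms, not pages deep inside the template body.
    template<Sortable C>
    void my_sort(C& container) {
        std::sort(container.begin(), container.end());
    }

    int main() {
        std::vector<int> v{3, 1, 2};
        my_sort(v);      // OK: vector<int> satisfies Sortable
        // my_sort(42);  // error: the constraint Sortable<int> is not satisfied
    }

The point is exactly Bjarne’s: the common cases get simpler and the error messages get saner, while the expert-only tricks remain available to experts.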

Read Full Post »

Jeff Atwood’s post two days ago inspired me to write this down. Thanks, Jeff.

“I can’t even remember the last time I was this excited about a computer.”

Jeff Atwood, November 1, 2012


Our industry is young again, full of the bliss and sense of wonder and promise of adventure that comes with youth.

Computing feels young and fresh in a way that it hasn’t felt for years, and that has only happened to this degree at two other times in its history. Many old-timers, including myself, have said “this feels like 1980 again.”

It does indeed. And the reason why is all about user interfaces (UI).

Wave 1: Late 1950s through 60s

First, computing felt young in the late 1950s through the 60s because it was young, and made computers personally available to a select few people. Having computers at all was new, and the ability to make a machine do things opened up a whole new world for a band of pioneers like Dijkstra and Hoare, and Russell (Spacewar!) and Engelbart (Mother of All Demos) who made these computers personal for at least a few people.

The machines were useful. But the excitement came from personally interacting with the machine.

Wave 2: Late 1970s through 80s

Second, computing felt young again in the late 1970s and 80s. Then, truly personal single-user computers were new. They opened up to a far wider audience the sense of wonder that came with having a computer of our very own, and often even with a colorful graphical interface to draw us into its new worlds. I’ll include Woods and Crowther (ADVENT) as an example, because they used a PDP as a personal computer (smile) and their game and many more like it took off on the earliest true PCs – Exidy Sorcerers and TRS-80s, Ataris and Apples. This was the second and much bigger wave of delivering computers we could personally interact with.

The machines were somewhat useful; people kept trying to justify paying $1,000 for one “to organize recipes.” (Really.) But the real reason people wanted them was that they were more intimate – the excitement once again came from personally interacting with the machine.

Non-wave: 1990s through mid-2000s

Although WIMP interfaces proliferated in the 1990s and did deliver benefits and usability, they were never exciting to the degree that computers were in the 80s. Why not? Because they weren’t nearly as transformative in making computers more personal, more fun. And then, to add insult to injury, once we shipped WIMPiness throughout the industry, we called it good for a decade and innovation in user interfaces stagnated.

I heard many people wonder whether computing was done, whether this was all there would be. Thanks, Apple, for once again taking the lead in proving them wrong.

Wave 3: Late 2000s through the 10s

Now, starting in the late 2000s and through the 10s, modern mobile computers are new and more personal than ever, and they’re just getting started. But what makes them so much more personal? There are three components of the new age of computing, and they’re all about UI (user interfaces)… count ’em:

  1. Touch.
  2. Speech.
  3. Gestures.

Now don’t get me wrong, these are in addition to keyboards and accurate pointing (mice, trackpads) and writing (pens), not instead of them. I don’t believe for a minute that keyboards and mice and pens are going away, because they’re incredibly useful – I agree with Joey Hess (HT to @codinghorror):

“If it doesn’t have a keyboard, I feel that my thoughts are being forced out through a straw.”

Nevertheless, touch, speech, and gestures are clearly important. Why? Because interacting with touch and speech and gestures is how we’re made, and that’s what lets these interactions power a new wave of making computers more personal. All three are coming to the mainstream in about that order…

Four predictions

… and all three aren’t done, they’re just getting started, and we can now see that at least the first two are inevitable. Consider:

Touchable screens on smartphones and tablets are just the beginning. Once we taste the ability to touch any screen, we immediately want and expect all screens to respond to touch. One year from now, when more people have had a taste of it, no one will question whether notebooks and monitors should respond to touch – though maybe a few will still question touch televisions. Two years from now, we’ll just assume that every screen should be touchable, and soon we’ll forget it was ever any other way. Anyone set on building non-touch mainstream screens of any size is on the wrong side of history.

Speech recognition on phones and in the living room is just the beginning. This week I recorded a podcast with Scott Hanselman, which will air in another week or two; in it, Scott shared something he observed firsthand in his son: Once a child experiences saying “Xbox Pause,” he will expect all entertainment devices to respond to speech commands, and if they don’t they’re “broken.” Two years from now, speech will probably be the norm as one way to deliver primary commands. (Insert Scotty joke here.)

Likewise, gestures to control entertainment and games in the living room are just the beginning. Over the past year or two, when giving talks I’ve sometimes enjoyed messing with audiences by “changing” a PowerPoint slide with a gesture in the air in front of the screen while really changing the slide with the remote in my pocket. I immediately share the joke, of course, and we all have a laugh together, but more and more often the audience members just think it’s a new product and expect it to work. Gestures aren’t just for John Anderton any more.

Bringing touch and speech and gestures to all devices is a thrilling experience. They are just the beginning of the new wave that’s still growing. And this is the most personal wave so far.

This is an exciting and wonderful time to be part of our industry.

Computing is being reborn, again; we are young again.

Read Full Post »

I’m seeing many younger programmers picking up C++. The average age at C++ events over the past year has been declining rapidly as the audience sizes grow with more and younger people in addition to the C++ veterans.

But this one just beats all [Facebook link added]:

A six-year-old child from Bangladesh is hoping to be officially recognised as the world’s youngest computer programmer.

Wasik Farhan-Roopkotha showed an aptitude for computing at an early age and started typing in Microsoft Word at just three years old, BBC News reports.

The precocious youngster was programming game emulators from the age of four and his achievements have already received extensive media coverage in his home country.

He has also gained knowledge of C++, the programming language developed by Danish computer scientist Bjarne Stroustrup, without any formal training.

This kid seems pretty exceptional. Welcome, Wasik! I don’t expect the programs to be very complicated, and I’ll leave aside questions of balancing computer time with real-world time and exercise, but this is still quite an achievement.

How young were you when you wrote your first program? I first discovered computers at age 11, when I switched to a new school that had a PET 2001, and wrote my first BASIC programs when I was 11 or 12, first on paper and then a little on the PET when I could get access to it. I still fondly remember when I finally got my own Atari 800 when I was 13… thanks again for the loan for that, mum and dad! It was the first loan I ever took, and I paid it off in a year with paper-route money. Having that computer was definitely worth a year of predawn paper delivery in the rain, sleet, and snow.

Read Full Post »

In answering a reader question about Flash today, I linked to Adobe’s November press release and I commented:

Granted, Adobe says it’s abandoning Flash ‘only for new mobile device browsers while still supporting it for PC browsers.’ This is still a painful statement because [in part] … the distinction between mobile devices and PCs is quickly disappearing as of this year as PCs are becoming fully mobilized.

But what’s a “mobile device” vs. a “PC” as of 2012? Here’s a current data point, at least for me.

For almost two weeks now, my current primary machine has been a Slate 7 running Windows 8 Consumer Preview, and I’m extremely pleased with it. It’s a full Windows notebook (sans keyboard), and a full modern tablet. How do I slot it between “mobile device” and “PC,” exactly? Oh, and the desktop browser still supports Flash, but the tablet style browser doesn’t…

Since I’ve been using it (and am using it to write this post), let me write a mini-review.

I loved my iPad, and still do, and so I was surprised how quickly I came to love this snappy device even more. Here are a few thoughts, in rough order from least to most important:

  • It has a few nice touches that I miss on iOS, like task switching by a simple swipe-from-left (much easier than double-clicking the home button and swiping, and my iPhone home button is starting to get unreliable with all the double-clicking [ETA: and I never got used to four-finger swiping, probably in part because it isn't useful on the iPhone]), having a second app open as a sidebar (which greatly relieves the aforementioned back-and-forth task-switching I find myself doing on iOS to refer to two apps), and some little things like including left- and right-cursor keys on the on-screen keyboard (compared to iOS’s touch-and-hold to position the cursor by finger using the magnification loupe). In general, the on-screen keyboard is not only unspeakably better than Win7’s attempt, but even slightly nicer than iPad’s, as I find myself not having to switch keyboards as much to get at common punctuation symbols.
  • I was happily surprised to find that some of my key web-related apps like Live Writer came already installed.
  • The App Store, which isn’t even live yet, already has many of my major apps, including Kindle, USA Today, and Cut the Rope. Most seem very reliable; a few marked “App Preview” are definitely beta quality at the moment, though. The Kindle app is solid and has everything I expected, except for one complaint: it should really go to a two-column layout in landscape mode like it does on iPad, especially given the wider screen. Still, the non-“preview” apps do work, and the experience and content are surprisingly nice for a not-officially-open App Store.
  • Real pen+ink support. This is a Big Deal, as I said two years ago. Yes, I’ve tried several iPad pens and apps for sort-of-writing notes, and no, iOS has nothing comparable here; the best I can say for the very best of them is that they’re like using crayons. Be sure to try real “ink” before claiming otherwise – if you haven’t, you don’t know what you’re missing. iPad does have other good non-pen annotation apps, and I’ve enjoyed using iAnnotate PDF extensively to read and annotate almost half of Andrei’s D book. But for reading articles and papers I just really, really miss pen+ink.
  • All my software just works, from compilers and editors to desktop apps for full Office and other work.
  • Therefore, finally, I get my desktop environment and my modern tablet environment without carrying two devices. My entire environment, from apps to files, is always there without syncing between notebook and tablet devices, and I can finally eliminate a device. I expected I would do that this year, but I’m pleasantly surprised to be able to do it for real already this early in the year with a beta OS and beta app store.

I didn’t expect to switch over to it this quickly, but within a few days of getting it I just easily switched to reading my current book-in-progress on this device while traveling (thanks to the Kindle app), reading and pen-annotating a couple of research papers on lock-free coding techniques (it’s by far my favorite OneNote device ever thanks to having both great touch and great pen+ink and light weight so I can just write), and using it both as a notebook and as a tablet without having to switch devices (just docking when I’m at my desk and using the usual large monitors and my favorite keyboard+mouse, or holding it and using touch+pen only). It already feels like a dream and very familiar both ways. I’m pretty sure I’ll never go back to a traditional clamshell notebook, ever.

Interestingly, as a side benefit, even the desktop apps are often quite usable in pure tablet+touch mode, and more usable than before, despite the apparently small targets. Those small targets do sometimes matter, and I occasionally reach for my pen when using those apps on my lap. But I’ve found that in practice they often don’t matter at all when you swipe to scroll a large region – I was surprised to find myself happily using Outlook in touch-only mode. In particular, it’s my favorite OneNote device ever.

By the end of this week, when I install a couple more apps, including the rest of my test C++ compilers, it will have fully replaced both my previous notebook and my previous tablet, at roughly the same price and power as the former alone (4GB RAM, 128GB SSD + Micro SD slot, Intel Core i5-2467M) and roughly the same weight and touch friendliness as the latter alone (1.98lb vs. 1.44lb). Dear Windows team, my back thanks you.

So, then, returning to the point – in our very near future, how much sense does it really make to distinguish between browsers for “mobile devices” and “PCs,” anyway? Convergence is already upon us and is only accelerating.

Read Full Post »

David Braun asked:

@Tom @Herb: What’s so wrong with flash that it should be boycotted? Have I been being abused by it in some way I’m not aware of? Also, does HTML5 have any bearing on the subject?

I’m not saying it should be boycotted, only that I avoid it. Here’s what I wrote two years ago: “Flash In the Pan”.  Besides security issues and crashing a lot, Flash is a headache for servicing and seems to be architecturally unsuited for lower-power environments.

Since then, two more major developments:

1. Even Adobe has given ground (if not given up).

Adobe subsequently abandoned Flash for mobile browsers and started shipping straight-to-HTML5 tools.

Granted, Adobe says it’s abandoning Flash ‘only for new mobile device browsers while still supporting it for PC browsers.’ This is still a painful statement because:

  • it’s obvious that ceding such high-profile and hard-fought ground sends a message about overall direction; and
  • the distinction between mobile devices and PCs is quickly disappearing as of this year as PCs are becoming fully mobilized (more on this in my next blog post).

2. We’re moving toward plugin-avoiding browsing.

Browsers are increasingly moving to reduce plugins, or eliminate them outright, for security/reliability/servicing reasons. Moving in that direction creates growing pressure, and eventually necessity, to deliver web content without relying on Flash or other plugins.

I’m not saying Flash will die off immediately or necessarily even die off entirely at all; there’s a lot of inertia, it’s still useful in many kinds of devices, and it may well hang on for some time. But its architectural problems and current trajectory are fairly clear, and it’s been months since I’ve heard someone complain that certain people were just being unfair – Jobs’ technical points are on the right side of history.

Read Full Post »

With so much happening in the computing world, now seemed like the right time to write “Welcome to the Jungle” – a sequel to my earlier “The Free Lunch Is Over” essay. Here’s the introduction:

 

Welcome to the Jungle

In the twilight of Moore’s Law, the transitions to multicore processors, GPU computing, and HaaS cloud computing are not separate trends, but aspects of a single trend – mainstream computers from desktops to ‘smartphones’ are being permanently transformed into heterogeneous supercomputer clusters. Henceforth, a single compute-intensive application will need to harness different kinds of cores, in immense numbers, to get its job done.

The free lunch is over. Now welcome to the hardware jungle.

 

From 1975 to 2005, our industry accomplished a phenomenal mission: In 30 years, we put a personal computer on every desk, in every home, and in every pocket.

In 2005, however, mainstream computing hit a wall. In “The Free Lunch Is Over” (December 2004), I described the reasons for the then-upcoming industry transition from single-core to multi-core CPUs in mainstream machines, why it would require changes throughout the software stack from operating systems to languages to tools, and why it would permanently affect the way we as software developers have to write our code if we want our applications to continue exploiting Moore’s transistor dividend.

In 2005, our industry undertook a new mission: to put a personal parallel supercomputer on every desk, in every home, and in every pocket. 2011 was special: it’s the year that we completed the transition to parallel computing in all mainstream form factors, with the arrival of multicore tablets (e.g., iPad 2, Playbook, Kindle Fire, Nook Tablet) and smartphones (e.g., Galaxy S II, Droid X2, iPhone 4S). 2012 will see us continue to build out multicore with mainstream quad- and eight-core tablets (as Windows 8 brings a modern tablet experience to x86 as well as ARM), and the last single-core gaming console holdout will go multicore (as Nintendo’s Wii U replaces Wii).

This time it took us just six years to deliver mainstream parallel computing in all popular form factors. And we know the transition to multicore is permanent, because multicore delivers compute performance that single-core cannot and there will always be mainstream applications that run better on a multi-core machine. There’s no going back.

For the first time in the history of computing, mainstream hardware is no longer a single-processor von Neumann machine, and never will be again.

That was the first act.  . . .
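
(To make “harness different kinds of cores” concrete in at least the simplest, CPU-only case, here is an illustrative C++11 sketch that splits a computation across however many hardware threads the machine reports. The essay’s larger point, of course, is that tomorrow’s code must also target GPU and elastic cloud cores, which this little sketch doesn’t attempt.)

    #include <algorithm>
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<double> data(10000000, 1.0);

        // One chunk of work per hardware thread (falling back to 1 if unknown).
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        auto chunk = data.size() / n;

        std::vector<std::future<double>> parts;
        for (unsigned i = 0; i != n; ++i) {
            auto first = data.begin() + i * chunk;
            auto last  = (i + 1 == n) ? data.end() : first + chunk;
            parts.push_back(std::async(std::launch::async,
                [first, last] { return std::accumulate(first, last, 0.0); }));
        }

        double total = 0.0;
        for (auto& f : parts)
            total += f.get();
        std::cout << total << '\n';   // prints the sum, 1e+07
    }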

 

I hope you enjoy it.

Read Full Post »

I don’t normally blog poetry, but the passing of our giants this past month has put me in such a mood.


What is built becomes our future
Hand-constructed, stone by stone
Quarried by our elders’ labors
Fashioned with their strength and bone
Dare to dream, and dare to conquer
Fears by building castles grand
But ne’er forget, and e’er remember
To take a new step we must stand
On the shoulders of our giants
Who, seeing off into the morrow,
Made the dreams of past turn truth –
How their passing is our sorrow.

Read Full Post »


What a sad, horrible month. First Steve Jobs, then Dennis Ritchie, and now John McCarthy. We are losing many of the greats all at once.

If you haven’t heard of John McCarthy, you’re probably learning about his many important contributions now. Some examples:

  • He’s the inventor of Lisp, the second-oldest high-level programming language, younger than Fortran by just one year. Lisp is one of the most influential programming languages in history. Granted, however, most programmers don’t directly use Lisp-based languages, so its great influence has been mostly indirect.
  • He coined the term “artificial intelligence.” Granted, however, AI got a bad rap from being oversold by enthusiasts like Minsky; for the past 20 years or so it’s been safer to talk in euphemisms like “expert systems.” So here too McCarthy’s great influence has been less direct.
  • He developed the idea of time-sharing, the first step toward multitasking. Okay, now we’re talking about a contribution that’s pretty directly influential to our modern systems and lives.

But perhaps McCarthy’s most important single contribution to modern computer science is still something else, yet another major technology you won’t hear nearly enough about as being his invention:

Automatic garbage collection. Which he invented circa 1959.

No, really, that’s not a typo: 1959. For context, the space age was barely a year old (Sputnik 1 had come down at the end of its three-month orbit in January 1958), and that year’s first quarter alone saw Fidel Castro take Cuba; Walt Disney release Sleeping Beauty; the Day the Music Died; the first Barbie doll; and President Eisenhower sign the bill enabling Hawaii to become a state.

GC is ancient. Electronic computers with magnetic-core memory were still something of a novelty (semiconductor RAM didn’t show up until a decade or so later), machine memory was measured in scant kilobytes, and McCarthy was already managing those tiny memories with automatic garbage collection.

I’ve encountered people who think GC was invented by Java in 1995. It was actually invented more than half a century ago, when our industry barely even existed.

Thanks, John.

And here’s hoping we can take a break for a while from writing these memorials to our giants.

Read Full Post »

Ritchie, Stroustrup, and Gosling

Dennis Ritchie gave very few interviews, but I was lucky enough to be able to get one of them.

Back in 2000, when I was editor of C++ Report, I interviewed the creators of C, C++, and Java all together:

The C Family of Languages: Interview with Dennis Ritchie, Bjarne Stroustrup, and James Gosling

This article appeared in Java Report, 5(7), July 2000 and C++ Report, 12(7), July/August 2000.

Their extensive comments — on everything from language history and design (of course) and industry context and crystal-ball prognostication, to personal preferences and war stories and the first code they ever wrote — are well worth re-reading and remarkably current now, some 11 years on.

As far as I know, it’s the only time these three have spoken together. It’s also the only time a feature article ran simultaneously in both C++ Report and Java Report.

Grab a cup of coffee, fire up your tablet, and enjoy.

Read Full Post »

dmr (Dennis Ritchie)

What a sad week.

Rob Pike reports that Dennis Ritchie has also passed away. Ritchie was one of the pioneers of computer science, and a well-deserving Turing Award winner for his many contributions, notably the creation of C — by far the most influential programming language in history, and still going strong today.

Aside: Speaking of “still going strong,” this is a landmark week for the ISO Standard C Programming Language as well. Just a couple of days ago, the new C standard passed what turned out to be its final ballot,[*] and so we now have the new ISO C11 standard. C11 includes a number of new features that parallel those in C++11, notably a memory model and a threads/mutexes/atomics concurrency library that is tightly aligned with C++11. The new C standard should be published by ISO in the coming weeks.

[*] ISO rules are that if you pass the penultimate ballot with unanimous international support, you get to skip the formality of the final ballot and proceed directly to publication.
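
To give a flavor of that alignment, here is a minimal C++11 sketch of the kind of threads-and-atomics code whose counterparts C11 now standardizes (thrd_create in <threads.h>, atomic_fetch_add in <stdatomic.h>, and friends map onto the same memory model); illustrative only, not production code:

    #include <atomic>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        std::atomic<int> counter{0};          // sequentially consistent by default

        std::vector<std::thread> workers;
        for (int i = 0; i != 4; ++i) {
            workers.emplace_back([&counter] {
                for (int j = 0; j != 100000; ++j)
                    counter.fetch_add(1, std::memory_order_relaxed);
            });
        }
        for (auto& t : workers)
            t.join();

        std::cout << counter.load() << '\n';  // always 400000: no data race
    }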

Bjarne Stroustrup made an eloquent point about the importance of Ritchie’s contributions to our field: “They said it couldn’t be done, and he did it.”

Here’s what Bjarne meant:

Before C, there was far more hardware diversity than we see in the industry today. Computers proudly sported not just deliciously different and offbeat instruction sets, but varied wildly in almost everything, right down to even things as fundamental as character bit widths (8 bits per byte doesn’t suit you? how about 9? or 7? or how about sometimes 6 and sometimes 12?) and memory addressing (don’t like 16-bit pointers? how about 18-bit pointers, and oh by the way those aren’t pointers to bytes, they’re pointers to words?).

There was no such thing as a general-purpose program that was both portable across a variety of hardware and also efficient enough to compete with custom code written for just that hardware. Fortran did okay for array-oriented number-crunching code, but nobody could do it for general-purpose code such as what you’d use to build just about anything down to, oh, say, an operating system.

So this young upstart whippersnapper comes along and decides to try to specify a language that will let people write programs that are: (a) high-level, with structures and functions; (b) portable to just about any kind of hardware; and (c) efficient on that hardware so that they’re competitive with handcrafted nonportable custom assembler code on that hardware. A high-level, portable, efficient systems programming language.

How silly. Everyone knew it couldn’t be done.

C is a poster child for why it’s essential to keep those people who know a thing can’t be done from bothering the people who are doing it. (And keep them out of the way while the same inventors, being anything but lazy and always in search of new problems to conquer, go on to use the world’s first portable and efficient programming language to build the world’s first portable operating system, not knowing that was impossible too.)

Thanks, Dennis.

Read Full Post »
