Archive for the ‘Java’ Category

I don’t often link to other articles, but this one is worth reading.

Why mobile web apps are slow

by Drew Crawford

… So if you are trying to figure out exactly what brand of crazy all your native developer friends are on for continuing to write the evil native applications on the cusp of the open web revolution, or whatever, then bookmark this page, make yourself a cup of coffee, clear an afternoon, find a comfy chair, and then we’ll both be ready.

He offers data (imagine!) to debunk many common memes and “easy answers” that routinely litter HN/Reddit/Slashdot comment threads. The piece is also often subtly (and intentionally) hilarious; watch for the understated humor, not just the obvious wit.

Don’t be distracted by the author’s viewpoint and emphasis on “iOS and JavaScript” development – the article covers lots of important ground, including:

  • developing for ARM vs. x86;
  • developing for desktop vs. mobile;
  • managed vs. native code performance;
  • JIT issues vs. inherent language design tensions;
  • why garbage collection is not at all the panacea it’s often billed to be and often needs to be emphatically avoided (did you realize Apple already jettisoned GC?); and
  • as many of you know already, why if you’re serious about performance you’ll be seriously serious about memory usage and access patterns as a first-order issue (a small illustration follows this list).
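
On that last point, here’s a tiny C++ illustration of why access patterns are a first-order issue (my own sketch, not code from the article): both functions below touch exactly the same elements, but the row-major traversal walks memory contiguously and is typically several times faster, purely because of cache behavior.

#include <vector>

// Sum a matrix stored in row-major order. Same data, same arithmetic,
// very different memory access patterns.
double sumRowMajor( const std::vector<double>& m, int rows, int cols ) {
   double sum = 0;
   for( int r = 0; r < rows; ++r )
      for( int c = 0; c < cols; ++c )
         sum += m[r * cols + c];   // contiguous accesses: cache-friendly
   return sum;
}

double sumColMajor( const std::vector<double>& m, int rows, int cols ) {
   double sum = 0;
   for( int c = 0; c < cols; ++c )
      for( int r = 0; r < rows; ++r )
         sum += m[r * cols + c];   // strided accesses: roughly a cache miss per element
   return sum;
}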

I agree with most of it, and not just because he quotes from my “When Will Better JITs Save Managed Code” blog post from last year.

Recommended.

Spoilers

A few takeaways from the conclusion (spoiler alert):

Garbage collection is exponentially bad in a memory-constrained environment. It is way, way worse than it is in desktop-class or server-class environments.

Every competent mobile developer, whether they use a GCed environment or not, spends a great deal of time thinking about the memory performance of the target device

JavaScript, as it currently exists, is fundamentally opposed to even allowing developers to think about the memory performance of the target device

If they did change their minds and allowed developers to think about memory, experience suggests this is a technically hard problem.

asm.js shows some promise, but even if they win you will be using C/C++ or similar “backwards” language as a frontend, rather than something dynamic like JavaScript


With the help of friends at CERT, Robert Seacord and David Svoboda in particular, I posted a note today linking to their CERT post, because people have been misunderstanding the recent Java vulnerabilities, thinking they’re somehow really C or C++ vulnerabilities because Java is implemented in C and C++.

From the post:

Are the Java vulnerabilities actually C and C++ vulnerabilities?

by Herb Sutter

 

You’ve probably seen the headlines:

[US-CERT] Java in Web Browser: Disable Now!

We’ve been telling people to disable Java for years. … We have confirmed that VU#625617 can be used to reliably execute code on Windows, OS X, and Linux platforms. And the exploit code for the vulnerability is publicly available and already incorporated into exploit kits. This should be enough motivation for you to turn Java off.

Firefox and Apple have blocked Java while U.S. Homeland Security recommends everyone disable it, because of vulnerabilities
Homeland Security still advises disabling Java, even after update

Some people have asked whether last week’s and similar recent Java vulnerabilities are actually C or C++ vulnerabilities – because, like virtually all modern systems software, Java is implemented in C and C++.

The answer is no, these particular exploits are pure Java. Some other exploits have indeed used vulnerabilities in Java’s native C code implementation, but the major vulnerabilities in the news lately are in Java itself, and they enable portable exploits on any operating system with a single program. …

Some other C++ experts who have better sense than I do won’t add the following bit publicly, but I can’t help myself: Insert “write once, pwn everywhere” joke here…


In Welcome to the Jungle, I predicted that “weak” hardware memory models will disappear. This is true, and it’s happening before our eyes:

  • x86 has always been considered a “strong” hardware memory model that supports sequentially consistent atomics efficiently.
  • The other major architecture, ARM, recently announced that they are adding strong memory ordering in ARMv8 with the new sequentially consistent ldar and stlr instructions, as I predicted they would. (Actually, Hans Boehm and I influenced ARM in this direction, so it was an ever-so-slightly disingenuous prediction…)

However, at least two people have been confused by what I meant by “weak” hardware memory models, so let me clarify what “weak” means – it means something different for hardware memory models and software memory models, so perhaps those aren’t the clearest terms to use.

By “weak (hardware) memory model” CPUs I mean specifically ones that do not natively support efficient sequentially consistent (SC) atomics. This matters because, on the software side, programming languages have converged on “sequential consistency for data-race-free programs” (SC-DRF, roughly aka DRF0 or RCsc) as the default (C11, C++11) or only (Java 5+) supported software memory model. POWER and ARMv7 notoriously do not support SC atomics efficiently.
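
To make “SC atomics” concrete, here’s a minimal C++11 sketch (illustrative only). The default memory order for std::atomic operations is memory_order_seq_cst, and the guarantee expressed by the assertion below is exactly what SC-DRF promises: because the program has no data races, it behaves as if all its atomic operations were interleaved in one global order.

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> x{0}, y{0};

int main() {
   std::thread t1( []{
      x.store( 1 );                 // SC store (memory_order_seq_cst by default)
      y.store( 1 );                 // SC store
   } );
   std::thread t2( []{
      while( y.load() == 0 ) { }    // SC load: spin until y is published
      assert( x.load() == 1 );      // SC guarantees we cannot see y==1 but x==0
   } );
   t1.join();
   t2.join();
}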

Hardware that supports only memory models weaker than SC-DRF, meaning that it does not support SC-DRF efficiently, is permanently disadvantaged and will either become stronger or atrophy. As I mentioned specifically in the article, the two main current hardware architectures with what I called “weak” memory models are current ARM (ARMv7) and POWER:

  • ARM recently announced ARMv8 which, as I predicted, is upgrading to SC acquire/release by adding new SC acquire/release instructions ldar and stlr that are mandatory in both 32-bit and 64-bit mode. In fact, this is something of an industry first — ARMv8 is the first major CPU architecture to support SC acquire/release instructions directly like this; see the code-generation sketch after this list. (Note: That’s for CPUs, but the roadmap for ARM GPUs is similar. ARM GPUs currently have a stronger memory model, namely fully SC; ARM has announced that its GPU roadmap has the GPUs becoming fully coherent with the CPUs, and ARM will likely add “SC load acquire” and “SC store release” instructions to the GPUs as well.)
  • It remains to be seen whether POWER will adapt similarly, or die out.
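
To see why this matters for efficiency, here’s roughly how a single SC atomic store is commonly compiled on each architecture, a sketch based on the well-known C/C++11 atomic mapping tables (exact sequences vary by compiler):

#include <atomic>

std::atomic<int> x{0};

void publish() {
   x.store( 1 );   // memory_order_seq_cst by default
   // Typical code generation for this one SC store (approximate):
   //   x86/x64:  mov [x],1 ; mfence   -- or a single xchg; cheap, the model is strong
   //   ARMv7:    dmb ; str ; dmb      -- full barriers around a plain store
   //   POWER:    hwsync ; st          -- heavyweight sync before the store
   //   ARMv8:    stlr                 -- the new SC store-release, one instruction
}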

Note that I’ve seen some people call x86 “weak”, but x86 has always been the poster child for a strong (hardware) memory model in all of our software memory model discussions for Java, C, and C++ during the 2000s. Therefore perhaps “weak” and “strong” are not useful terms if they mean different things to some people, and I’ve updated the WttJ text to make this clearer.

I will be discussing this in detail in my atomic<> Weapons talk at C&B next week, which I hope to make freely available online in the near future (as I do most of my talks). I’ll post a link on this blog when I can make it available online.


Ritchie, Stroustrup, and Gosling

Dennis Ritchie gave very few interviews, but I was lucky enough to be able to get one of them.

Back in 2000, when I was editor of C++ Report, I interviewed the creators of C, C++, and Java all together:

The C Family of Languages: Interview with Dennis Ritchie, Bjarne Stroustrup, and James Gosling

This article appeared in Java Report, 5(7), July 2000 and C++ Report, 12(7), July/August 2000.

Their extensive comments — on everything from language history and design (of course) and industry context and crystal-ball prognostication, to personal preferences and war stories and the first code they ever wrote — are well worth re-reading and remarkably current now, some 11 years on.

As far as I know, it’s the only time these three have spoken together. It’s also the only time a feature article ran simultaneously in both C++ Report and Java Report.

Grab a cup of coffee, fire up your tablet, and enjoy.


Why C++

Channel 9 has just posted a recording of my intro talk at C++ and Beyond 2011 last month in Banff. Here’s the link: C++ and Beyond 2011: Why C++.

It’s a keynote-y talk, not a technical talk, but we felt it was important to address a significant trend involving the language. The goal is to share a perspective on, and rationale for, the recent resurgence of interest in C++, both across the industry and within Microsoft.

Whether or not you agree with the perspective and rationale, I hope you enjoy it!


Over the holidays, Erik Meijer interviewed me on Channel 9. We covered a wide variety of topics, mostly centered on C++ with some straying into C#/Java/Haskell/Clojure/Erlang, but ranging from auto and closures to why (not?) derive future<T> from T, and from what the two most important problems in parallelism are in 2011 to why and how to taste new programming languages regularly. I think it turned out well. Enjoy!


This month’s Effective Concurrency column, “volatile vs. volatile”, is now live on DDJ’s website and also appears in the print magazine. (As a historical note, it’s DDJ’s final print issue, as I mentioned previously.)

This article aims to answer the frequently asked question: “What does volatile mean?” The short answer: “It depends: do you mean Java/.NET volatile or C/C++ volatile?” From the article:

What does the volatile keyword mean? How should you use it? Confusingly, there are two common answers, because depending on the language you use volatile supports one or the other of two different programming techniques: lock-free programming, and dealing with ‘unusual’ memory.

Adding to the confusion, these two different uses have overlapping requirements and impose overlapping restrictions, which makes them appear more similar than they are. Let’s define and understand them clearly, and see how to spell them correctly in C, C++, Java and C# — and not always as volatile. …
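
Here’s a minimal C++ sketch of the two techniques the column contrasts (my own illustration; the device address is made up). For inter-thread communication you want atomics; for “unusual” memory such as a memory-mapped hardware register you want C/C++ volatile.

#include <atomic>
#include <cstdint>

// Technique 1: lock-free programming. In ISO C++ the tool is std::atomic,
// NOT C/C++ volatile, which provides no inter-thread ordering guarantees.
std::atomic<int> stopRequested{0};

void worker() {
   while( stopRequested.load() == 0 ) {
      // … do a unit of work …
   }
}

// Technique 2: “unusual” memory. C/C++ volatile tells the compiler that
// every read and write is an observable side effect that must not be
// optimized away or fused. (0x40000000 is a made-up device address.)
volatile std::uint32_t* const statusReg =
   reinterpret_cast<volatile std::uint32_t*>( 0x40000000 );

std::uint32_t readStatus() {
   return *statusReg;   // actually touches the device on every call
}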

I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns and the DDJ print magazine issue in which they first appeared:

The Pillars of Concurrency (Aug 2007)

How Much Scalability Do You Have or Need? (Sep 2007)

Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)

Apply Critical Sections Consistently (Nov 2007)

Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)

Use Lock Hierarchies to Avoid Deadlock (Jan 2008)

Break Amdahl’s Law! (Feb 2008)

Going Superlinear (Mar 2008)

Super Linearity and the Bigger Machine (Apr 2008)

Interrupt Politely (May 2008)

Maximize Locality, Minimize Contention (Jun 2008)

Choose Concurrency-Friendly Data Structures (Jul 2008)

The Many Faces of Deadlock (Aug 2008)

Lock-Free Code: A False Sense of Security (Sep 2008)

Writing Lock-Free Code: A Corrected Queue (Oct 2008)

Writing a Generalized Concurrent Queue (Nov 2008)

Understanding Parallel Performance (Dec 2008)

Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)

volatile vs. volatile (Feb 2009)


DDJ posted the next Effective Concurrency column a couple of weeks earlier than usual. “Lock-Free Code: A False Sense of Security” just went live on DDJ’s site, and also appears in the print magazine.
 
This is a special column in a way, because I rarely critique someone else’s published code. However, mere weeks ago DDJ itself published an article with fundamentally broken code that tried to show how to write a simplified lock-free queue. I corresponded with the author, Petru Marginean, and his main reviewer, Andrei Alexandrescu, to discuss the problems, and they have patched the code somewhat and added a disclaimer to the article. But the issues need to be addressed, and so Petru kindly let me vivisect his code in public in this column (and the next, not yet available, which will show how to do it right).
 
From the article:

[Lock-free code is] hard even for experts. It’s easy to write lock-free code that appears to work, but it’s very difficult to write lock-free code that is correct and performs well. Even good magazines and refereed journals have published a substantial amount of lock-free code that was actually broken in subtle ways and needed correction.

To illustrate, let’s dissect some peer-reviewed lock-free code that was published here in DDJ just two months ago …
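
To give a flavor of the kind of subtlety involved, here is my own minimal illustration (not the code from the DDJ article) of a classic lock-free bug: publishing data with ordinary loads and stores. Nothing prevents the compiler or CPU from reordering the two stores in the producer, and the unsynchronized accesses are a data race besides.

// My own minimal illustration of a classic lock-free bug (not the DDJ code).
struct Node { int value; Node* next; };

Node* head = nullptr;     // BUG: shared across threads, but not atomic

void producer( int v ) {
   Node* n = new Node{ v, head };
   head = n;              // may become visible to readers before *n’s contents do
}

int consumer() {
   Node* n = head;        // racy read: undefined behavior
   return n ? n->value : -1;
}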

I hope you enjoy it. (Note: Yes, the title is a riff on Tom Cargill’s classic article “Exception Handling: A False Sense of Security”, one of Scott Meyers’ five “Most Important C++ Non-Book Publications…Ever”.)
 
Finally, here are links to previous Effective Concurrency columns (based on the magazine print issue dates):
August 2007 The Pillars of Concurrency
September 2007 How Much Scalability Do You Have or Need?
October 2007 Use Critical Sections (Preferably Locks) to Eliminate Races
November 2007 Apply Critical Sections Consistently
December 2007 Avoid Calling Unknown Code While Inside a Critical Section
January 2008 Use Lock Hierarchies to Avoid Deadlock
February 2008 Break Amdahl’s Law!
March 2008 Going Superlinear
April 2008 Super Linearity and the Bigger Machine
May 2008 Interrupt Politely
June 2008 Maximize Locality, Minimize Contention
July 2008 Choose Concurrency-Friendly Data Structures
August 2008 The Many Faces of Deadlock
September 2008 Lock-Free Code: A False Sense of Security


I just received the following question, whose answer is the same in C++, C#, and Java.

Question: In the following code, why isn’t the destructor/disposer ever called to clean up the Widget when the constructor emits an exception? You can entertain this question in your mainstream language of choice:

// C++ (an edited version of the original code
// that the reader emailed me)
#include <exception>
// … assume some class Widget, defined elsewhere

class Gadget {
   Widget* w;

public:
   Gadget() {
     w = new Widget();
     throw std::exception();
     // … or some API call might throw
   }

   ~Gadget() {
      if( w != nullptr ) delete w;
   }
};

// C# (equivalent)
class Gadget : IDisposable {
   private Widget w;

   public Gadget() {
     w = new Widget();
     throw new Exception();
     // … or some API call might throw
   }

   public void Dispose() {
     // … eliding other mechanics, eventually calls:
     if( w != null ) w.Dispose();
     // or just “w = null;” if Widget isn’t IDisposable
   }
}

// Java (equivalent)
class Gadget {
   private Widget w;

   public Gadget() {
     w = new Widget();
     throw new RuntimeException();   // unchecked, so no ‘throws’ clause is needed
     // … or some API call might throw
   }

   public void dispose() {
     if( w != null ) w.dispose();
     // or just “w = null;” if Widget isn’t disposable
   }
}

Interestingly, we can answer this even without knowing the calling code… but there is typical calling code, similar in all three languages, that reinforces the answer.

Construction and Lifetime

The fundamental principles behind the answer are the same in C++, C#, and Java:

  1. A constructor conceptually turns a suitably sized chunk of raw memory into an object that obeys its invariants. An object’s lifetime doesn’t begin until its constructor completes successfully. If a constructor ends by throwing an exception, that means it never finished creating the object and setting up its invariants — and at the point the exceptional constructor exits, the object not only doesn’t exist, but never existed.
  2. A destructor/disposer conceptually turns an object back into raw memory. Therefore, just like all other nonprivate methods, destructors/disposers assume as a precondition that “this” object is actually a valid object and that its invariants hold. Hence, destructors/disposers only run on successfully constructed objects.

I’ve covered some of these concepts and consequences before in GotW #66, “Constructor Failures,” which appeared in updated and expanded form as Items 17 and 18 of More Exceptional C++.

In particular, if Gadget’s constructor throws, it means that the Gadget object wasn’t created and never existed. So there’s nothing to destroy or dispose: The destructor/disposer not only isn’t needed to run, but it can’t run because it doesn’t have a valid object to run against.

Incidentally, it also means that the Gadget constructor isn’t exception-safe, because it and only it can clean up resources it might leak. In the C++ version, the usual simple way to write the code correctly is to change the w member’s type from Widget* to shared_ptr<Widget> or an equivalent smart pointer type that owns the object. But let’s leave that aside for now to explore the more general issue.
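
For completeness, here’s a minimal sketch of that C++ fix (Widget as before). Because the smart pointer member fully owns the Widget, if the constructor body throws then the already-constructed member is destroyed during unwinding and the Widget is released automatically:

// C++ (exception-safe version of the example)
#include <exception>
#include <memory>

class Gadget {
   std::shared_ptr<Widget> w;

public:
   Gadget()
     : w( std::make_shared<Widget>() )
   {
     throw std::exception();
     // … or some API call might throw; either way w’s destructor still
     // runs and frees the Widget: no leak, and no user-written
     // ~Gadget() is needed at all
   }
};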

A Look At the Calling Code

Next, let’s see how these semantics are actually enforced, whether by language rules or by convention, on the calling side in each language. Here are the major idiomatic ways in each language to use a Gadget object in an exception-safe way:

// C++ caller

{
  Gadget myGadget;
  // do something with myGadget
}

// C# caller

using( Gadget myGadget = new Gadget() )
{
  // do something with myGadget
}

// Java caller

Gadget myGadget = new Gadget();

try {
  // do something with myGadget
}
finally {
  myGadget.dispose();
}

Consider the two cases where an exception can occur:

  • What if an exception is thrown while using myGadget – that is, during “do something with myGadget”? In all cases, we know that myGadget’s destructor/dispose method is guaranteed to be called.
  • But what if an exception is thrown while constructing myGadget? Now in all cases the answer is that the destructor/dispose method is guaranteed not to be called.

Put another way, we can say for all cases: The destructor/dispose is guaranteed to be run if and only if the constructor completed successfully.
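
That “if and only if” is directly observable. Here’s a tiny self-contained C++ test (my own sketch):

#include <iostream>
#include <stdexcept>

struct Noisy {
   explicit Noisy( bool fail ) {
      std::cout << "constructor ran\n";
      if( fail ) throw std::runtime_error( "construction failed" );
   }
   ~Noisy() { std::cout << "destructor ran\n"; }
};

int main() {
   try { Noisy ok( false ); }   // prints “constructor ran”, then “destructor ran” at the closing brace
   catch( ... ) { }
   try { Noisy bad( true ); }   // prints “constructor ran” only: no destructor runs
   catch( ... ) { std::cout << "exception caught; no destructor ran\n"; }
}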

Another Look At the Destructor/Dispose Code

Finally, let’s return to each key line of code one more time:

// C++
      if( w != nullptr ) delete w;
// C#
     if( w != null ) w.Dispose();
// Java
     if( w != null ) w.dispose();

The motivation for the nullness tests in the original example was to clean up partly-constructed objects. That motivation is suspect in principle, because it means the constructors aren’t exception-safe (only they can clean up after themselves), and as we’ve seen it’s flawed in practice, because the destructors/disposers won’t ever run on the code paths the original motivation cared about. So we don’t need the nullness tests for that reason, although you might still have nullness tests in destructors/disposers to handle ‘optional’ parts of an object, where a valid object might hold a null pointer or reference member during its lifetime.

In this particular example, we can observe that the nullness tests are actually unnecessary, because w will always be non-null if the object was constructed successfully, and there is no (legitimate) way to call the destructor/disposer on an object that was never successfully constructed. (Furthermore, C++ developers will know that the test is unnecessary for a second reason: delete is a no-op if the pointer passed to it is null, so there’s never a need to check for that special case.)

Conclusion

When it comes to object lifetimes, all OO languages are more alike than different. Object and resource lifetime matters, whether or not you have a managed language, garbage collection (finalizers are not destructors/disposers!), templates, generics, or any other fancy bell or whistle layered on top of the basic humble class.

The same basic issues arise everywhere… the code you write to deal with them is just spelled using different language features and/or idioms in the different languages, that’s all. We can have lively discussions about which language offers the easiest/best/greenest/lowest-carbon-footprint way to express how we want to deal with a particular aspect of object construction, lifetime, and teardown, but those concerns are general and fundamental no matter which favorite language you’re using to write your code this week.

