My CppCon talks

Also, my CppCon talks are all up on the CppCon YouTube channel. You can find them here:

I hope you find them useful.

13 thoughts on “My CppCon talks”

  1. Hi Herb! Thank you for your talk at CppCon 2014! It was really enlightening and helpful!

  2. Say we have 20 people writing a piece of infrastructure code that controls the lifetime of objects and uses tables with shared pointers. We have tens of APIs that expose plain pointers to these objects (no lifetime control). We have several groups writing infrastructure.

    We have 1000+ developers working on 4 continents who can get assigned to do a bug fix in a relatively random piece of code. We do not want them to be able to call delete on a pointer even if they feel that it is a great idea.

    These are non-owning pointers I am talking about. Again, it is not a big deal to work around the make_shared + private-destructor issue one way or another, but somehow it does not seem right.

  3. How does a non-owning pointer (such as a child node’s back pointer to the parent that owns it and so will outlive it) imply a private destructor?

  4. There is one thing that bothers me a lot. A non-owning pointer means a private destructor, and a private destructor means make_shared will not work. Yes, one can add some boilerplate code and do the same without make_shared – but that is not the way towards basics and simplicity.
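    One common workaround for the make_shared-plus-non-public-destructor problem the comments above describe is to make the destructor protected (rather than private) and route construction through a factory with a local derived helper type. This is a sketch, not code from the talks; the names `Node` and `create` are illustrative:

    ```cpp
    #include <cassert>
    #include <memory>

    class Node {
    public:
        static std::shared_ptr<Node> create(int v) {
            // make_shared needs public construction/destruction; provide it
            // via a local type derived from Node. Its implicit public dtor
            // may call our protected dtor because it is a derived class.
            struct Concrete : Node {
                explicit Concrete(int v) : Node(v) {}
            };
            return std::make_shared<Concrete>(v);
        }
        int value() const { return value_; }

    protected:
        explicit Node(int v) : value_(v) {}
        ~Node() = default;  // non-public: `delete p;` on a Node* won't compile

    private:
        int value_;
    };

    int main() {
        auto n = Node::create(42);
        Node* raw = n.get();   // non-owning pointers can be handed out freely
        assert(raw->value() == 42);
        // delete raw;         // error: ~Node() is inaccessible here
    }
    ```

    The destructor has to be protected rather than fully private so the helper can destroy the object, but the practical goal from comment 2 – that a random maintainer holding a plain `Node*` cannot call delete on it – is still met.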

  5. Also, regarding the question during the video on whether the dtor for your example slist using atomic shared_ptr will ‘blow the stack’: I have actually experienced this in one of my data structures. I was keeping a similar structure alive via shared_ptr reference counts. In the limit, it looks just like your slist structure’s dtor freeing a long list by letting the refcounts drop to 0 starting from the root and following ->next.

    When compiled with GCC without optimization on Linux, my program crashed due to stack overflow. If I raised the stack limit to a very large value, it did not crash. And when I compiled with GCC with optimization enabled, it apparently performed tail-call optimization (or put less incidental data on the stack) and did not crash even with the default stack depth.
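    The usual fix for the recursive-teardown problem described above is to make the list's destructor iterative, so each node is detached before it dies and no chain of nested shared_ptr destructors builds up on the stack. A minimal sketch (names assumed, not the talk's exact code):

    ```cpp
    #include <memory>
    #include <utility>

    template <typename T>
    struct slist {
        struct Node {
            T value;
            std::shared_ptr<Node> next;
        };
        std::shared_ptr<Node> head;

        void push_front(T v) {
            head = std::make_shared<Node>(Node{std::move(v), head});
        }

        ~slist() {
            // Iterative teardown: each old head is destroyed with next
            // already moved out (null), so stack depth stays O(1)
            // no matter how long the list is.
            while (head) {
                head = std::move(head->next);
            }
        }
    };

    int main() {
        slist<int> s;
        for (int i = 0; i < 1'000'000; ++i) s.push_front(i);
        // Without the loop in ~slist, destroying a million-node list can
        // overflow the stack via chained shared_ptr/Node destructors,
        // exactly as the commenter observed with unoptimized GCC builds.
    }
    ```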

  6. Due to time constraints I only watched the first 40min of Juggling Razor Blades part 2, so I apologize if you covered this later.

    In the atomic shared_ptr implementation, pop exchanges p and p->next. Won’t this leave p pointing to itself after removal, and thus cause a memory leak? It seems you would need to release p->next after the swap succeeds, correct?
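    A sketch of the kind of pop being asked about may help (this is an assumed reconstruction using the C++11 free-function atomic overloads for shared_ptr, not Herb's exact code). After a successful compare-exchange, the popped node never points to itself: head now holds the successor, the popped node still holds its original reference to that successor, and both that reference and the node itself are released when the local `p` goes out of scope.

    ```cpp
    #include <atomic>
    #include <memory>
    #include <optional>
    #include <utility>

    template <typename T>
    class lockfree_stack {
        struct Node {
            T value;
            std::shared_ptr<Node> next;
        };
        std::shared_ptr<Node> head;  // in C++20, std::atomic<std::shared_ptr<Node>>

    public:
        void push(T v) {
            auto p = std::make_shared<Node>(Node{std::move(v), nullptr});
            p->next = std::atomic_load(&head);
            // On failure the CAS refreshes p->next with the current head.
            while (!std::atomic_compare_exchange_weak(&head, &p->next, p)) {}
        }

        std::optional<T> pop() {
            auto p = std::atomic_load(&head);
            // On failure the CAS refreshes p with the current head.
            while (p && !std::atomic_compare_exchange_weak(&head, &p, p->next)) {}
            if (!p) return std::nullopt;
            return std::move(p->value);
            // p is destroyed here; if it was the last owner, the node dies
            // and its reference to p->next is released -- no leak.
        }
    };
    ```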

  7. Re for(e:c): Some people were concerned about this creating a pitfall where our implicitly declaring a new variable e when there might be an e already in scope would hide it and surprise the programmer. So the proposer, STL, is going to work with those who had the concerns offline between meetings with the goal of coming up with an updated proposal for next meeting in Lenexa that will address the concerns, possibly via a slightly different syntax. Just as a strawman example, one possible syntax I have heard mentioned is “for e(c)” which may make it more obvious you’re declaring a variable by being closer to the syntactic pattern of “widget w(” / “auto w(” / “for w(” all declaring local variables.

  8. Herb, is there any info published about why for(e : c) was not accepted by the committee? I haven’t been able to find anything. If not, can you give us some info on it? Thanks.

  9. The lecture on lock-free programming is very interesting! In the view that lock-free means that “someone makes progress”, I have had an aha!-experience recently. Maybe CSP-type channels aren’t “blocking” at all! We have stated this over the years. And maybe this community, who said for years that “blocking is evil”, was right? Have we aliased the word blocking? I think so. I have written about both this and atomics in my latest blog notes. They can be found from my home page below. Disclaimer: I have no binding or commercial interest in those blog notes.

  10. I believe I figured out the answer: there can probably be more than one thread waiting for its turn to acquire the lock, and therefore it is necessary to check again on entering step 3 – hence the point of the double-checked lock after all.
    (Sorry for the spam.)

  11. I have a question regarding your talk on Lock-Free Programming part 1 that I wish to clarify. When you explained the correct way of doing double-checked locking, you mentioned 6 steps:

    1 first check
    2 acquire
    3 check again
    4 create and assign
    5 release
    6 return

    If more than one thread may pass step 1, any thread that acquires the lock in step 2 should be safe until the end of step 5, where the lock_guard goes out of scope and the mutex gets released.

    Q: From steps 2–5, how many threads can hold the lock? If the answer is 1, then why can’t we just omit step 3? If the answer is more than one, then are steps 2–5 really atomic?
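    The six steps in the question can be sketched like this (`widget` is a stand-in name, not the talk's exact code). The answer is the one comment 10 arrives at: exactly one thread at a time holds the lock between steps 2 and 5, but a second thread may already have passed step 1 before the first thread finished step 5, so step 3 is what stops that second thread from constructing the widget a second time.

    ```cpp
    #include <atomic>
    #include <mutex>

    class widget { /* ... */ };

    std::atomic<widget*> instance{nullptr};
    std::mutex m;

    widget* get_instance() {
        widget* p = instance.load();                   // 1: first check
        if (!p) {
            std::lock_guard<std::mutex> lock{m};       // 2: acquire
            p = instance.load();                       // 3: check again
            if (!p) {
                p = new widget();                      // 4: create and
                instance.store(p);                     //    assign
            }
        }                                              // 5: release (lock_guard)
        return p;                                      // 6: return
    }
    ```

    Steps 2–5 are mutually exclusive, not atomic with respect to step 1; that gap is exactly why the second check exists.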

  12. Hello Herb,

    First of all, thank you for the great material that you’re providing. I watched your presentations on lock-free programming and I have several questions about the final singly linked list implementation that you are showing using shared_ptr.

    In your push() function, you are using make_shared to allocate your Node. However, what guarantee do you have on the memory-allocation side, where the allocator might grab a global mutex to synchronize threads? If that’s the case, then you can’t guarantee that your function is lock-free, since malloc might grab a global lock and prevent the system from making global progress.

    Second, about the shared_ptr and atomic_load / store overloads. This is nice but this might also be implemented with a global lock for all shared_ptr instance (for instance is_lock_free might return false), or some lock-stripping mechanism I guess. Again, how can you guarantee that your function is lock-free if the implementation underneath is not ? Also, in a ready-heavy workload, wouldn’t incrementing the shared_ptr’s ref-count be a bottleneck ? Everytime you traverse the list, you have to atomically increment the refcount for every node. I’m not sure it it will scale if you have bazillon of threads reading the list. Also, if I understood correctly, you are solving the problem of the memory reclamation (when popping a node), by relying on the shared_ptr refcount to delete the node when the last reader just releases the reference to the node. This means that you might trigger a delete on a read-side critical section. In a real-time scenario where you might want to keep your reads as fast as possible, I’m not sure that this is the default behaviour that you want, especially if the destruction of the object is not free. I know that you have to make trade-offs, but I’m wondering what are your thoughts.
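    The implementation-defined point the commenter raises can at least be inspected at runtime: the atomic shared_ptr operations are permitted to be implemented with internal locks, and the standard provides a query to ask which strategy a given implementation uses. A minimal check:

    ```cpp
    #include <atomic>
    #include <cstdio>
    #include <memory>

    int main() {
        auto p = std::make_shared<int>(42);
        // Implementation-defined: many standard libraries implement the
        // atomic shared_ptr free functions with a small internal lock
        // table, in which case this reports "no".
        std::printf("atomic shared_ptr ops lock-free here: %s\n",
                    std::atomic_is_lock_free(&p) ? "yes" : "no");
    }
    ```

    A "no" answer does not make the data structure incorrect, but it does mean the lock-freedom guarantee rests on the platform, which is exactly the concern being raised.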

  13. I’m not sure how I feel about the addition of things like for (e : c) {}, and your point about it not being a “breaking change” is weird. Don’t macros mean any change is a “breaking change”?

    I’m not saying these kind of additions are a bad thing, but I feel that the C++ committee is focussed on adding new things (often strange and complicated things), rather than on actually maintaining and fixing the language – the committee is taking the easy way out for marketing points.

    In the end what I’m getting at is… Why the heck hasn’t anyone fixed vector&lt;bool&gt; yet?

    Here’s a few arguments for doing so:

    1. Not fixing it cripples future programmer productivity (i.e. it’s going to cost everyone who uses C++ money).
    2. The future contains more time (and hence C++ code) than the past (i.e. more value).
    3. There’s no such thing as a “breaking change”. No one’s going back to change old compilers to not compile old code.
    4. It’s only ever going to get harder to fix it. So fix it now.
    5. Every person who encounters it begins to hate C++ just a little, because wtf is this ****?
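    For readers who have not hit the complaint above: vector&lt;bool&gt; is a packed specialization, so its operator[] returns a proxy object rather than a real bool&amp;, and generic code written against vector&lt;T&gt; quietly breaks for T = bool. A small illustration:

    ```cpp
    #include <cassert>
    #include <vector>

    int main() {
        std::vector<int>  vi{0, 1};
        std::vector<bool> vb{false, true};

        int& ri = vi[1];     // fine: a genuine reference into the vector
        // bool& rb = vb[1]; // error: vb[1] is a proxy, not a bool&

        auto proxy = vb[1];  // std::vector<bool>::reference, a bit proxy
        proxy = false;       // writes through to the packed bit
        assert(vb[1] == false);
        assert(ri == 1);
    }
    ```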

Comments are closed.