Yesterday’s Build talk is now online

That was fast!

9 thoughts on “Yesterday’s Build talk is now online”

  1. Isn’t your performance tip at minute 42:00 a little bit lame?

    AIComponent, PhysicsComponent, RenderComponent seem to be
    interfaces – otherwise it makes little sense to store pointers to them in the container.

    So normally the objects will be accessed through the interface AND
    they can have different sizes – like AIComponentVersion1 and AIComponentVersion2.

    That way it is not possible to use contiguous vectors – so what’s the point?

  2. @Malte Skarupke

    Inserting into a sorted vector is O(n), not O(n^2).
    Inserting into a set is O(log(n)), not O(n log(n)).

  3. Are there any plans to go more in depth on doing cross platform mobile dev in c++, using native ui & language for the interface on each platform? I know xamarin is an option to do this but I’d really like to learn more about doing this with a c++ code base. Are there any talks or presentations about this?

  4. Hi,
    loved the talk but a few comments about the list vs vector comparison. One slide (at 49 minutes in) states “Vector beats list massively for insertion and deletion.” This is not strictly true, because in your benchmark vector beats list for search, not insertion. Insertion and deletion will almost always be faster in list, as long as you know where you want to insert an element. In your benchmark you have to search for the insertion point though, and the faster search through vector outweighs the faster insertion in list. So I am not refuting your benchmark, just the explanation.

    As for your benchmark of map vs sorted vector: I cannot reproduce your results. I used a std::set (which uses a tree like std::map) and that is much faster at inserting random numbers. Inserting n random numbers into a sorted vector should be O(n^2) whereas inserting them into a set should be O(n log(n)) and that is exactly what I see in my benchmarks. For me a sorted vector is only faster if you have less than 1000 elements.

    What’s a good way to post my benchmark code here? I don’t know how to link to another website from the comments, and I’m not sure if I should just dump 50 lines of C++ code in the comments here.

  5. Herb, I may have misheard or misunderstood you, but you were talking about constant factors being very low…
    It sounded like you were claiming the desired operations on list and vector have different complexities, but they have the same one: O(n). If you had an iterator to the place you want to insert at, it would be O(1) for list, but you don’t have one…
    All this example really says is that reading is not 123x slower than writing – i.e. the fact that for the list you must read n/2 elements on average and write one, while for the vector you must write n/2 elements, doesn’t hurt the vector, because writing is not orders of magnitude slower than reading.
    Disclaimer: I could be wrong – please reply rather than just downvoting (I’m OK with the thumbs-down, I just need to know why).

  6. Regarding GotW #54: based on your current talk and the proliferation of advanced processor behaviours since that GotW was written, will you be rethinking “In Most Cases, Prefer Using deque” to include some kind of size constraint?

    Also, could you please put the code you used to generate the graph on GitHub or somewhere similar? I know it’s trivial code that can easily be rewritten, but it’d be nice to see your version.

  7. Hi Herb!

    Great talk! Really important topics.

    I have two minor remarks.

    – First, I think you should have mentioned that you can get vectors with SBO by using a stack_allocator (e.g. Howard Hinnant’s), so you really can use the same interface for heap- and stack-allocated vectors and remove all allocations! This is a bit of a drawback of array versus vector: they do not share the same interface (which is IMO a huge advantage of Haskell’s Sequence containers – they all share the same “list-like” interface, so they are drop-in replacements).

    – Second, your slide 43 recommends map and unordered_map for dictionary lookup. I think one should recommend boost::flat_set/flat_multi_set/flat_map/flat_multi_map first, since they are _contiguous_ and are just so much faster. They can be slower if you are inserting a lot, but in that case they give you the vector, you push_back at the end, sort it, and give it back to them. Perfect performance.

    IMO a point worth remarking is that there is also a huge Array culture in C++ for the reasons you mention. Not only for matrix types, but for trees, and geometry. Want an array-based octree for a particle system? Want sorted vectors (like flat_set)? Unboxed vectors (i.e. vectors that perform AOS to SOA transformations)? Jagged vectors (flat vectors of vectors)? C++ has all of these. There is a large culture and a really nice ecosystem of array-based data-structures in C++ because they are really hardware friendly (as opposed to paper/blackboard friendly). They are fast.

  8. Recently I read “C# for Systems Programming” on Joe Duffy’s Blog and then “.NET native” was announced. It will be interesting to see how C++ holds up.

Comments are closed.