GotW #5: Overriding Virtual Functions

Virtual functions are a pretty basic feature, but they occasionally harbor subtleties that trap the unwary. If you can answer questions like this one, then you know virtual functions cold, and you’re less likely to waste a lot of time debugging problems like the ones illustrated below.

 

Problem

JG Question

1. What do the override and final keywords do? Why are they useful?

Guru Question

2. In your travels through the dusty corners of your company’s code archives, you come across the following program fragment written by an unknown programmer. The programmer seems to have been experimenting to see how some C++ features worked.

(a) What could be improved in the code’s correctness or style?

(b) What did the programmer probably expect the program to print, but what is the actual result?

#include <iostream>
#include <complex>
using namespace std;

class base {
public:
    virtual void f( int );
    virtual void f( double );
    virtual void g( int i = 10 );
};

void base::f( int ) {
    cout << "base::f(int)" << endl;
}

void base::f( double ) {
    cout << "base::f(double)" << endl;
}

void base::g( int i ) {
    cout << i << endl;
}

class derived: public base {
public:
    void f( complex<double> );
    void g( int i = 20 );
};

void derived::f( complex<double> ) {
    cout << "derived::f(complex)" << endl;
}

void derived::g( int i ) {
    cout << "derived::g() " << i << endl;
}

int main() {
    base    b;
    derived d;
    base*   pb = new derived;

    b.f(1.0);
    d.f(1.0);
    pb->f(1.0);

    b.g();
    d.g();
    pb->g();

    delete pb;
}

11 thoughts on “GotW #5: Overriding Virtual Functions”

  1. 1. What do the override and final keywords do? Why are they useful?

    These keywords signal intent to the compiler: override says we really mean to override an inherited virtual function, and final says something must not be overridden (or derived from). The override keyword is especially useful for catching “overload instead of override” mistakes, which happen often in big class hierarchies spread across developers and are very difficult to spot otherwise. The final keyword covers the opposite situation, when you don’t want a function to be overridden (or a class to be derived from) and you really mean it.
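    For illustration, here is a minimal sketch (the widget/button names are hypothetical, not from the problem) of how override turns a silent signature mismatch into a compile-time error:

    ```cpp
    #include <iostream>

    struct widget {
        virtual void resize( int ) { std::cout << "widget::resize\n"; }
        virtual ~widget() = default;
    };

    struct button : widget {
        // void resize( long ) override;   // error: does not override (long != int)
        void resize( int ) override {      // OK: signature matches, truly overrides
            std::cout << "button::resize\n";
        }
    };

    int main() {
        button b;
        widget& w = b;
        w.resize(1);  // virtual dispatch: prints "button::resize"
    }
    ```

    Without override, the commented-out long version would compile silently as a brand-new function that merely hides the base one.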

    2. This is an example of the abovementioned “overloading instead of overriding” situation.

    1) You should avoid overriding and overloading functions at the same time if you don’t REALLY need that. It leads to errors almost every time.
    2) You should avoid giving default values to virtual functions’ parameters as this leads to a difference in behavior for normal and polymorphic invoking of these functions, which is a Bad Thing.
    3) The base class should have a virtual destructor, since we delete a derived object through a base*.

    Taking that into account and assuming that we really need all this overload-override-default-parameter-stuff:

    #include <iostream>
    #include <complex>
    using namespace std;
    
    class base {
    public:
        virtual void f( int );
        virtual void f( double );
        virtual void g( int i = 10 );
    };
    
    void base::f( int ) {
        cout << "base::f(int)" << endl;
    }
    
    void base::f( double ) {
        cout << "base::f(double)" << endl;
    }
    
    void base::g( int i ) {
        cout << i << endl;
    }
    
    class derived: public base {
    public:
        void f( complex<double> );
        void g( int i = 20 );
    };
    
    void derived::f( complex<double> ) {
        cout << "derived::f(complex)" << endl;
    }
    
    void derived::g( int i ) {
        cout << "derived::g() " << i << endl;
    }
    
    int main() {
        base    b;
        derived d;
        base*   pb = new derived;
    
        b.f(1.0); // calls base::f(double)
        d.f(1.0); // calls derived::f(std::complex) !!
        pb->f(1.0); // calls base::f(double)
    
        b.g(); // calls base::g(10)
        d.g(); // calls derived::g(20)
        pb->g(); // calls derived::g(10) !!
    
        delete pb; // undefined behavior: base has no virtual destructor !!
    }
    
    

    Changed:

    #include <iostream>
    #include <complex>
    using namespace std;
    
    class base {
    public:
        virtual void f( int );
        virtual void f( double );
        virtual void g( int i ); // do not provide a default...
        void g() { g(10); } // ...provide a non-virtual overload instead
        virtual ~base() {} // provide a virtual destructor
    };
    
    void base::f( int ) {
        cout << "base::f(int)" << endl;
    }
    
    void base::f( double ) {
        cout << "base::f(double)" << endl;
    }
    
    void base::g( int x ) {
        cout << x << endl;
    }
    
    class derived: public base {
    public:
        void f( const complex<double>& );
        void g( int i = 20 );
    };
    
    void derived::f( const complex<double>& ) {
        cout << "derived::f(complex)" << endl;
    }
    
    void derived::g( int i ) {
        cout << "derived::g() " << i << endl;
    }
    
    int main() {
        base    b;
        derived d;
        base*   pb = new derived;
    
        b.f(1.0); // calls base::f(double)
        d.f(1.0); // calls base::f(double)
        pb->f(1.0); // calls base::f(double)
    
        b.g(); // calls base::g() (which calls base::g(10))
        d.g(); // calls derived::g(20) 
        pb->g(); // calls base::g(), which virtually dispatches to derived::g(10)
    
        delete pb; // This one is ok now
    }
    
    
  2. Somehow lost a part of my code, the derived class should be like:

    class derived: public base {
    public:
        using base::f; // this is essential if you mean overloading across class hierarchies
        void f( const complex<double>& );
        void g( int i = 20 );
    };
    
  3. I think this is an excellent example where the NVI (non-virtual interface) pattern can be applied. Have the function with the default argument be public and non-virtual, and have it call a private or protected “doG(int i)”-style function that is virtual and overridden by derived classes. For the overloaded functions, each could call a “doF(complex)”-style virtual protected/private function.
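    A minimal sketch of that NVI shape for g (the do_g name and the output strings are illustrative assumptions): the public non-virtual function owns the default argument, so it is stated exactly once, in base:

    ```cpp
    #include <iostream>

    class base {
    public:
        void g( int i = 10 ) { do_g(i); }  // public, non-virtual: owns the default
        virtual ~base() = default;
    private:
        // private customization point, overridden by derived classes
        virtual void do_g( int i ) { std::cout << "base " << i << "\n"; }
    };

    class derived : public base {
    private:
        void do_g( int i ) override { std::cout << "derived " << i << "\n"; }
    };

    int main() {
        derived d;
        base& rb = d;
        d.g();   // "derived 10": same default whether called...
        rb.g();  // "derived 10": ...directly or through a base reference
    }
    ```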

  4. Ad 1: I feel this has been adequately addressed in the previous comments :-)
    Perhaps I could just add that “final” has two uses: member-function-wide and class-wide. However, “preventing overriding” is a good general summary of its intent in both cases (inheritance being necessary to enable overriding anyway).
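    A minimal sketch of both uses (the shape/circle names are hypothetical, not from the thread):

    ```cpp
    #include <iostream>

    struct shape {
        virtual double area() const { return 0.0; }
        virtual ~shape() = default;
    };

    struct circle : shape {
        double r = 1.0;
        // member-function-wide: area() may not be overridden further down
        double area() const final { return 3.14159 * r * r; }
    };

    // class-wide: nothing may derive from unit_circle at all
    struct unit_circle final : circle {};

    // struct oval : unit_circle {};                              // error: base is final
    // struct ellipse : circle { double area() const override; }; // error: area() is final

    int main() {
        unit_circle u;
        std::cout << u.area() << "\n";  // prints 3.14159
    }
    ```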

    Ad 2: Also agree with the already-made comments.

    To expand, and to perhaps somewhat enliven the discussion, I’ll just make some (possibly unorthodox) extra points [mostly concerning part (a)]:

    0. For improved modularity & re-usability (and, perhaps less importantly for this case, to also improve compile times), let’s move the class definitions to a separate header file. To help in accomplishing that without issues, I’d preemptively get rid of the using-directive “using namespace std;” and wrap the classes in a distinct namespace.

    1. I won’t suggest replacing base* with a smart pointer (like, say, std::unique_ptr<base>), and I won’t suggest replacing new with the relevant smart pointer factory (say, make_unique<derived>, assuming C++14 or access to the well-known piece of code :]) to remove delete.

    Instead, I’d suggest getting rid of the free store allocation entirely! :-)

    After all, it’s completely unnecessary for dynamic (run-time) polymorphism — and references work perfectly fine here.

    // This point was aptly made and discussed by Bryan St. Amour in his “Need polymorphism? Consider using references” post, although I’m only able to find a reddit reference to it: http://www.reddit.com/r/cpp/comments/vo2zc/need_polymorphism_consider_using_references/

    In other words, let’s get rid of this:

    base*   pb = new derived;

    and use this instead:

    base & rb = d;

    so we can get rid of having to do this:

    delete pb;

    (Make sure to change pb-> to rb. everywhere, too.)

    In this case we even have an extra gain of not having to create another, duplicate object.
    In case this was actually necessary, we can just do:

    derived another_d;
    base & rb = another_d;

    Note that we get the automatic / scope-based life-time for free, without having to use std::unique_ptr.

    2. Since the alternative interface designs are discussed (e.g., NVI) — how about using CRTP and static (compile-time) polymorphism instead?

    See:
    http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Curiously_Recurring_Template_Pattern

    I can’t see anything that warrants dynamic (run-time) polymorphism here and IMO it brings an extra, unnecessary (and thus unwanted) complexity: I’d take even a potentially long-winded compiler diagnostic any day over debugging code with (avoidable!) run-time pointer chasing and worrying about dangling references/pointers.

    It’s true that historically most books/references presented pointers and virtuals first, and templates last (or in a footnote, or not at all). This may (incorrectly) lead some to assume that dynamic (run-time) polymorphism is somehow “easier” or “simpler”. Given that it’s actually more flexible and offers more features (due to late / run-time binding, ordinarily not available via CRTP), one can argue it’s exactly the opposite. While templates (and, by extension, CRTP) may sometimes generate somewhat long-winded diagnostics (note that this is much less of a concern with C++11, type-traits, static_assert, and, as a last resort, enable_if; it will hopefully become even less of a concern with concepts lite in C++14), at least we have a higher degree of confidence that a compiled program is correct (with the associated debugging and reasoning about the program semantics becoming simplified as well).

    Given the aforementioned advances, shouldn’t the static (compile-time) polymorphism (and CRTP) be *the* default polymorphism in this day and age?

  5. Hello,

    I really long for a ‘real-life’ example of ‘final’ in use. Though I fully understand how it works, I still fail to see where, from a design point of view, it can be relevant or even judicious.

    Of course, I do not consider ‘value classes’, since, as mentioned in a previous GotW, the final keyword may enforce that semantic (though we could still wonder why we would avoid private inheritance, which can be considered an alternative (and debatable) form of composition).

    For reference classes, apart from helping compiler optimisation, once a class has been designed to be refined, I fail to see a valid reason why stopping this behavior would be a good design choice (unless inheritance was not the most appropriate variation pattern in the first place).

    Thanks in advance.

    Best regards.

  6. @Mikhael, isn’t d.g() supposed to call derived::g(10), not derived::g(20)? Can’t override default values.

  7. Previous comments already say that “You should avoid overriding and overloading functions at the same time if you don’t REALLY need that. It leads to errors almost every time”.
    The easy way to remember this rule is to think of every virtual function as an interface definition. And if you have an interface, you want it to be well (strictly) defined: for every function you know exactly what it accepts, with no maybes.

  8. @Barry
    Defaults are not inherited, nor are they stored in virtual tables. So you cannot override them, but you can kinda change them in derived classes (and face the judgement). It is a compile-time issue, so it is based on static types.
    So afaik d.g() really calls derived::g(20) and pb->g() really calls derived::g(10).
    I may be wrong though; I don’t have a ready-to-use C++11 compiler right now.
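    For anyone who wants to check, here is a self-contained snippet (using the thread’s base/derived/g names, with illustrative output strings) that compiles and shows exactly this: defaults are bound by static type, while dispatch is still virtual.

    ```cpp
    #include <iostream>

    struct base {
        virtual void g( int i = 10 ) { std::cout << "base::g " << i << "\n"; }
        virtual ~base() = default;
    };

    struct derived : base {
        void g( int i = 20 ) override { std::cout << "derived::g " << i << "\n"; }
    };

    int main() {
        derived d;
        base& rb = d;
        d.g();   // "derived::g 20": static type derived supplies the default
        rb.g();  // "derived::g 10": static type base supplies the default,
                 // but the call still dispatches virtually to derived::g
    }
    ```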

  9. 1. override means that this function should override some virtual function from the parent class. final means that this function must not be overridden in any child class.
    2. a) my version of improved code would be (I skipped some code that wasn’t modified):

    class base {
    public:
        virtual void f( int ) final; // if a function is not meant to be overridden, it should be final
        virtual void f( double ) final;
        virtual void g( int i ); // avoid default values on virtual functions; they are chosen statically
        virtual void g(); // use a separate virtual function without arguments instead
        virtual ~base() {}
    };
    
    void base::g() {
        g(10);
    }
    
    class derived: public base {
    public:
        using base::f; // to make the f functions from base visible here
        void f( complex<double> );
        void g( int i ) override; // override makes the compiler check that this function really overrides something
        void g() override;
    };
    
    void derived::g() {
        g(20);
    }
    

    2. b) the actual result of execution is:

    b.f(1.0);   // base::f(double)
    d.f(1.0);   // derived::f( complex<double> ), expected base::f( double )
    pb->f(1.0); // base::f(double)

    b.g();   // prints 10
    d.g();   // prints derived::g() 20
    pb->g(); // prints derived::g() 10 since defaults are chosen statically, expected 20
    
  10. Sometimes the Curiously Recurring Template Pattern can help too.

    #include <iostream>
    #include <complex>
    
    using namespace std;
    
    template <class Derived>
    class baseT {
        Derived& derived() {  return *static_cast<Derived*>(this);  }
    public:
        void f( int ) { cout << "base::f(int)" << endl; }
        void f( double ) { cout << "base::f(double)" << endl; }
        void g( int i ) { derived().g(i); }
        void g( ) { derived().g();  }
    };
    
    class base : public baseT<base> { 
    public:
        void g( int i ) { cout << i << endl; }
        void g( ) { g(10);  }
    };
    
    class derived : public baseT<derived> { 
    public:
        using baseT<derived>::f; //make f() from base visible here
        void f( const complex<double> & ) { cout << "derived::f(complex)" << endl; }
        void g( int i ) { cout << "derived::g() " << i << endl; }
        void g( )  { g(20); }
    };
    
    int main() {
        base    b;
        derived d;
        auto & rb = d;
     
        b.f(1.0);       //base::f(double)
        d.f(1.0);       //base::f(double)
        rb.f(1.0);      //base::f(double)
     
        b.g();          //10
        d.g();          //derived::g() 20
        rb.g();         //derived::g() 20
    
        return 0;
    }
    
  11. A few quick ones:
    1) Why do you need to define a virtual destructor for an object that has no dynamic memory / pointer members? Not calling it causes no memory leaks, right?

    2) how exactly in this case:
    base b;
    derived d;
    auto & rb = d;

    does auto default to the base type and not the derived type? Because I expect to have the same effect as invoking a virtual call on base* and derived*, which have different outcomes, as we all know.

    3) About final: this is the first time I hear about it. I hope it doesn’t have the same semantics as in C#, though it looks like it does.
    PLEASE consider removing it. As someone pointed out, please show one compelling case for it, whereas there are plenty of points I can give you against it. It is a mess in C#!!! The most compelling reason being unit testing: mock frameworks cannot work against sealed classes, so that is a valid objection regardless of the design decisions that led you to using sealed!
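    Regarding the virtual destructor question above: it is not only about memory leaks. Deleting a derived object through a base* whose destructor is not virtual is undefined behavior, and in practice it typically skips the derived destructor and the destructors of the derived class’s members. A minimal sketch (the names and output strings are illustrative) of the well-defined version:

    ```cpp
    #include <iostream>
    #include <string>

    struct base {
        virtual ~base() { std::cout << "~base\n"; }  // virtual: enables deletion via base*
    };

    struct derived : base {
        std::string name{"derived"};                 // a member whose destructor must run
        ~derived() override { std::cout << "~derived\n"; }
    };

    int main() {
        base* pb = new derived;
        delete pb;  // well-defined: runs ~derived (and name's destructor), then ~base
    }
    ```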

Comments are closed.