Virtual functions are a pretty basic feature, but they occasionally harbor subtleties that trap the unwary. If you can answer questions like this one, then you know virtual functions cold, and you’re less likely to waste a lot of time debugging problems like the ones illustrated below.
Problem
JG Question
1. What do the override and final keywords do? Why are they useful?
Guru Question
2. In your travels through the dusty corners of your company’s code archives, you come across the following program fragment written by an unknown programmer. The programmer seems to have been experimenting to see how some C++ features worked.
(a) What could be improved in the code’s correctness or style?
(b) What did the programmer probably expect the program to print, but what is the actual result?
class base {
public:
    virtual void f( int );
    virtual void f( double );
    virtual void g( int i = 10 );
};

void base::f( int ) {
    cout << "base::f(int)" << endl;
}

void base::f( double ) {
    cout << "base::f(double)" << endl;
}

void base::g( int i ) {
    cout << i << endl;
}

class derived : public base {
public:
    void f( complex<double> );
    void g( int i = 20 );
};

void derived::f( complex<double> ) {
    cout << "derived::f(complex)" << endl;
}

void derived::g( int i ) {
    cout << "derived::g() " << i << endl;
}

int main() {
    base    b;
    derived d;
    base*   pb = new derived;

    b.f(1.0);
    d.f(1.0);
    pb->f(1.0);

    b.g();
    d.g();
    pb->g();

    delete pb;
}
A few quick ones:
1) Why do you need to define a virtual destructor for an object that has no dynamic memory or pointer members? Not calling it causes no memory leaks, right?
2) How exactly, in this case:
    base b;
    derived d;
    auto& rb = d;
does auto default to the base type and not the derived type? Because I expect this to have the same effect as invoking a virtual call through a base* and a derived*, which have different outcomes, as we all know.
3) About final: this is the first time I have heard of it. I hope it doesn't have the same semantics as in C#, though it looks like it does.
PLEASE consider removing it. As someone pointed out, please show one compelling case for it, whereas there are plenty of points I can give against it. It is a mess in C#! The most compelling reason is unit testing: mock frameworks cannot work against sealed classes. That is a valid case regardless of the design decisions that led you to use sealed!
Sometimes the Curiously Recurring Template Pattern can help, too.
1. override means that this function is intended to override some virtual function from the parent class. final means that this function must not be overridden in any child class.
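For instance, a minimal sketch of what that buys you (the widget/gadget names are illustrative, not from the puzzle):

    #include <complex>

    class widget {
    public:
        virtual void f( double );
        virtual void g( int );
        virtual ~widget() = default;
    };

    class gadget : public widget {
    public:
        // void f( std::complex<double> ) override;  // would not compile: overrides nothing
        void f( double ) override;                    // OK: overrides widget::f(double)
        void g( int ) final;                          // OK, and no class below gadget may override it
    };

Without override, the commented-out line would silently compile as a brand-new overload that hides the base class's functions, which is exactly the trap in the puzzle.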
2. a) my version of improved code would be (I skipped some code that wasn’t modified):
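(Presumably something along these lines; a sketch assuming the puzzle's headers and using-directive, and that the main fixes are a virtual destructor plus explicit override markers:)

    class base {
    public:
        virtual void f( int );
        virtual void f( double );
        virtual void g( int i = 10 );
        virtual ~base() = default;     // we delete a derived object through a base*
    };

    class derived : public base {
    public:
        using base::f;                 // unhide base::f(int) and base::f(double)
        void f( complex<double> );     // a new overload, not an override
        void g( int i = 20 ) override; // overrides, though changing the default is still dubious
    };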
2. b) the actual result of execution is:
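(Presumably:)

    base::f(double)
    derived::f(complex)
    base::f(double)
    10
    derived::g() 20
    derived::g() 10

(followed by undefined behavior at delete pb;, since base has no virtual destructor)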
@Barry
Default arguments are not inherited, nor are they stored in virtual tables. So you cannot override them, but you can kind of change them in derived classes (and face the consequences). They are substituted at compile time, so they are based on static types.
So, AFAIK, d.g() really calls derived::g(20) and pb->g() really calls derived::g(10).
I may be wrong, though; I don't have a ready-to-use C++11 compiler right now.
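A minimal standalone sketch that demonstrates this (B and D are illustrative stand-ins for the puzzle's classes):

    #include <iostream>

    struct B {
        virtual void g( int i = 10 ) { std::cout << "B " << i << "\n"; }
        virtual ~B() = default;
    };

    struct D : B {
        void g( int i = 20 ) override { std::cout << "D " << i << "\n"; }
    };

    int main() {
        D d;
        B& rb = d;
        d.g();   // prints "D 20": default argument taken from D, the static type of d
        rb.g();  // prints "D 10": default taken from B (static type), body from D (dynamic type)
    }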
Previous comments already say that “You should avoid overriding and overloading functions at the same time if you don’t REALLY need that. It leads to errors almost every time”.
An easy way to remember this rule is to think of every virtual function as an interface definition. If you have an interface, you want it to be well (strictly) defined: for every function, you know exactly what it accepts, with no maybes.
@Mikhael, isn't d.g() supposed to call derived::g(10), not derived::g(20)? You can't override default values.
Hello,
I really long for a 'real-life' example of 'final' use. Though I fully understand how it works, I still don't manage to see where, from a design point of view, it can be relevant or even judicious.
Of course, I do not consider 'value classes', since, as mentioned in a previous GotW, the final keyword may enforce that semantic (though we could still wonder why one would avoid private inheritance, which can be considered an alternative (and debatable) way of achieving composition).
For reference classes, apart from helping compiler optimisation, once a class has been designed to be refined I fail to see a valid reason why stopping that behavior would be a good design choice (or else inheritance was not the most appropriate variation pattern in the first place).
Thanks in advance.
Best regards.
Ad 1: I feel this has been adequately addressed in the previous comments :-)
Perhaps I could just add that "final" has two uses: on a member function and on a whole class. However, "preventing overriding" is a good general summary of its intent in both cases (inheritance being necessary to enable overriding anyway).
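Both forms in one illustrative sketch (the shape/circle names are made up for the example):

    class shape {
    public:
        virtual void draw();
        virtual ~shape() = default;
    };

    class circle : public shape {
    public:
        void draw() final;                       // overrides shape::draw; nothing below may override it
    };

    class unit_circle final : public circle { }; // no class may derive from unit_circle at all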
Ad 2: Also agree with the already-made comments.
To expand, and to perhaps somewhat enliven the discussion, I’ll just make some (possibly unorthodox) extra points [mostly concerning part (a)]:
0. For improved modularity & re-usability (and, perhaps less importantly for this case, to also improve compile times), let’s move the class definitions to a separate header file. To help in accomplishing that without issues, I’d preemptively get rid of the using-directive “using namespace std;” and wrap the classes in a distinct namespace.
1. I won't suggest replacing "base* pb = new derived;" with a smart pointer (like, say, "std::unique_ptr<base> pb;"), and I won't suggest replacing "new derived" with the relevant smart pointer factory (say, "std::make_unique<derived>()", assuming C++14 or access to the well-known piece of code :]) to remove "delete pb;".
Instead, I’d suggest getting rid of the free store allocation entirely! :-)
After all, it’s completely unnecessary for dynamic (run-time) polymorphism — and references work perfectly fine here.
// This point was aptly made and discussed by Bryan St. Amour in his “Need polymorphism? Consider using references” post, although I’m only able to find a reddit reference to it: http://www.reddit.com/r/cpp/comments/vo2zc/need_polymorphism_consider_using_references/
In other words, let's get rid of this:

    base* pb = new derived;

and use this instead:

    base& rb = d;

so we can get rid of having to do this:

    delete pb;

(Make sure to change "pb->f(1.0);" and "pb->g();" to "rb.f(1.0);" and "rb.g();", too.)
In this case we even have an extra gain of not having to create another, duplicate object. In case a second object was actually necessary, we can just do:

    derived d2;
    base&   rb = d2;

Note that we get the automatic / scope-based lifetime for free, without having to use std::unique_ptr.
2. Since the alternative interface designs are discussed (e.g., NVI) — how about using CRTP and static (compile-time) polymorphism instead?
See:
http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Curiously_Recurring_Template_Pattern
I can't see anything that warrants dynamic (run-time) polymorphism here, and IMO it brings extra, unnecessary (and thus unwanted) complexity: I'd take even a potentially long-winded compiler diagnostic any day over debugging code with (avoidable!) run-time pointer chasing and worrying about dangling references/pointers.
It's true that historically most books/references presented pointers and virtuals first, and templates last (or in a footnote, or not at all). This may (incorrectly) lead some to assume that dynamic (run-time) polymorphism is somehow "easier" or "simpler". Given that it's actually more flexible and offers more features (due to late / run-time binding, ordinarily not available via CRTP), one can argue it's exactly the opposite. Templates (and, by extension, CRTP) may sometimes generate somewhat long-winded diagnostics, but that is much less of a concern with C++11, type traits, static_assert, and, as a last resort, enable_if, and it will hopefully become even less of a concern with concepts lite in C++14. In return, we get a higher degree of confidence that a program which compiles is correct, and debugging and reasoning about the program's semantics become simpler as well.
Given the aforementioned advances, shouldn’t the static (compile-time) polymorphism (and CRTP) be *the* default polymorphism in this day and age?
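For illustration, a minimal CRTP sketch of the puzzle's g() (the g_impl name is my own invention; a sketch, not a drop-in replacement):

    #include <iostream>

    template<typename Derived>
    class base {
    public:
        void g( int i = 10 ) {
            // static (compile-time) dispatch to the derived implementation
            static_cast<Derived*>( this )->g_impl( i );
        }
    };

    class derived : public base<derived> {
    public:
        void g_impl( int i ) { std::cout << "derived::g() " << i << "\n"; }
    };

    int main() {
        derived d;
        d.g();   // prints "derived::g() 10": one default argument, defined once in base
    }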
I think this is an excellent example where the NVI (non-virtual interface) pattern can be applied. Have the function with the default argument be public and non-virtual, and have it call a private or protected "doG(int i);"-style function that is virtual and overridden by derived classes. For the overloaded function, each overload could call a "doF(complex<double>);"-style virtual protected/private function.
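A sketch of that suggestion (doG/doF as named above; a sketch only, assuming the usual headers):

    #include <complex>
    #include <iostream>

    class base {
    public:
        void g( int i = 10 ) { doG( i ); }              // public, non-virtual: the default lives in one place
        void f( std::complex<double> x ) { doF( x ); }
        virtual ~base() = default;
    private:
        virtual void doG( int i ) { std::cout << i << "\n"; }
        virtual void doF( std::complex<double> ) { std::cout << "base::doF" << "\n"; }
    };

    class derived : public base {
    private:
        void doG( int i ) override { std::cout << "derived::g() " << i << "\n"; }
    };

Callers always go through the stable non-virtual interface, so the default argument is declared exactly once, and derived classes customize behavior by overriding doG/doF rather than by hiding g/f.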
Somehow lost a part of my code, the derived class should be like:
1. What do the override and final keywords do? Why are they useful?
These keywords signal to the compiler that we actually tried to override something, or that something is not subject to overriding. The override keyword is especially useful for avoiding "overload instead of override" errors, which tend to happen often in big class hierarchies spread across developers and are very difficult to spot otherwise. The final keyword is useful in the opposite situation: when you don't want to override (or be overridden) and you really mean it.
2. This is an example of the above-mentioned "overloading instead of overriding" situation.
1) You should avoid overriding and overloading functions at the same time if you don’t REALLY need that. It leads to errors almost every time.
2) You should avoid giving default values to virtual functions’ parameters as this leads to a difference in behavior for normal and polymorphic invoking of these functions, which is a Bad Thing.
3) The base class should have a virtual destructor, because main() deletes a derived object through a pointer to base.
Taking that into account and assuming that we really need all this overload-override-default-parameter-stuff:
Changed:
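(Presumably the changed declarations were along these lines:)

    virtual ~base() = default;      // in base: virtual destructor added
    void g( int i = 20 ) override;  // in derived: explicit override added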