Bjarne Stroustrup is the designer and original implementer of C++. He is the author of numerous papers and several books, including The C++ Programming Language (Addison-Wesley, 1985-2000) and The Design and Evolution of C++ (Addison-Wesley, 1994). He took an active role in the creation of the ANSI/ISO standard for C++ and continues to work on the maintenance and revision of that standard. He currently holds the College of Engineering Chair in Computer Science at Texas A&M University, where he is a professor of computer science.
On September 22, 2003, Bill Venners met with Bjarne Stroustrup at the JAOO conference in Aarhus, Denmark. In this interview, which is being published in multiple installments on Artima.com, Stroustrup gives insights into C++ best practice.
Bill Venners: I originally learned C++ from Borland's "World of C++" video. At the beginning of that video, you have a brief cameo appearance in which you state that what you were trying to do in C++ is raise the level of abstraction for programming.
Bjarne Stroustrup: That's right.
Bill Venners: What does raising the level of abstraction mean, and why is a high level of abstraction good?
Bjarne Stroustrup: A high level of abstraction is good, not just in C++, but in general. We want to deal with problems at the level we are thinking about those problems. When we do that, we have no gap between the way we understand problems and the way we implement their solutions. We can understand the next guy's code. We don't have to be the compiler.
Abstraction is a mechanism by which we understand things. Expressing a solution in terms of math, for instance, means we really did understand the problem. We didn't just hack a bunch of loops to try out special cases. There is always the temptation to provide just the solution to a particular problem. However, unless we try to generalize and see the problem as an example of a general class of problems, we may miss important parts of the solution to our particular problems and fail to find concepts and general solutions that could help us in the future. If somebody has a theory, such as a theory for matrix manipulation, you can just work at the level of those concepts and your code will become shorter, clearer, and more likely to be correct. There's less code to write, and it's easier to maintain.
I believe raising the level of abstraction is fundamental in all practical intellectual endeavors. I don't consider that a controversial statement, but people sometimes consider it controversial because they think code at a higher level of abstraction is necessarily less efficient. For example, I got an email two days ago from somebody who had heard me give a talk in which I had been arguing for using a matrix library with proper linear algebra support. He said, "How much more does using the matrix library cost than using arrays directly? I'm not sure I can afford it." To his great surprise, my answer was, "If you want the efficiency I pointed to, you cannot use the arrays directly."
The only code faster than the fastest code is no code. By abstracting to matrix manipulation operations, you give the compiler enough type information to enable it to eliminate many operations. If you were writing the code at the level of arrays, you would not eliminate those operations unless you were smarter than just about everybody. So you'd not only have to write ten times as much code if you used arrays instead of the matrix library, but you'd also have to accept a program that runs more slowly. By operating at the level where we can understand things, sometimes we can also operate at the level where we can analyze the code (we being compilers in the second case) and get better code.
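The kind of elimination Stroustrup alludes to can be sketched with a toy vector library. Everything here (the names Vec and Scaled, the fused-loop technique) is an illustrative assumption, not any particular library's design; the point is only that the types of the subexpressions let the library compute `a*x + y` in one loop with no temporary.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical minimal vector type; not from any real matrix library.
struct Vec {
    std::vector<double> d;
    explicit Vec(std::size_t n, double v = 0.0) : d(n, v) {}
    double&     operator[](std::size_t i)       { return d[i]; }
    double      operator[](std::size_t i) const { return d[i]; }
    std::size_t size() const { return d.size(); }
};

// a * x does no work yet: it returns a typed proxy describing the operation.
struct Scaled { double a; const Vec& x; };
inline Scaled operator*(double a, const Vec& x) { return {a, x}; }

// (a * x) + y is evaluated in a single fused loop: no temporary vector is
// ever allocated for a * x. The type information is what makes this
// elimination possible; raw arrays give the library no such handle.
inline Vec operator+(const Scaled& s, const Vec& y) {
    Vec r(y.size());
    for (std::size_t i = 0; i < r.size(); ++i)
        r[i] = s.a * s.x[i] + y[i];
    return r;
}
```

With this sketch, `Vec r = 2.0 * x + y;` runs as one loop, where a naive library would first build a temporary for `2.0 * x` and then loop again to add `y`.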
My two favorite examples of this phenomenon are matrix times vector operations, where C++ on a good day can beat Fortran, and simple sorts, where C++ on a good day can beat C. The reason in both cases is you've expressed the program so directly, so cleanly, that the type system can help in generating better code, and you can't do that unless you have a suitable level of abstraction. You get this beautiful case where your code gets clearer, shorter, and faster. It doesn't happen all the time, of course, but it is so beautiful when it does.
Bill Venners: In the static versus dynamic typing debate, the proponents of strong typing often claim that although a dynamically typed language can help you whip up a prototype very quickly, to build a robust system you need a statically typed language. By contrast, the main message about static typing that I've gotten from you in your talks and writings has been that static typing can help an optimizer work more effectively. In your view, what are the benefits of static typing, both in C++ and in general?
Bjarne Stroustrup: There are a couple of benefits. First, I think you can understand things better in a statically typed program. If we can say there are certain operations you can do on an integer, and this is an integer, then we can know exactly what's going on.
Bill Venners: When you say we know what's going on, do you mean programmers or compilers?
Bjarne Stroustrup: Programmers. I do tend to anthropomorphize, though.
Bill Venners: Anthropomorphize programmers?
Bjarne Stroustrup: Anthropomorphize compilers. I tend to do that partly because it's tempting, and partly because I've written compilers. So as programmers, I feel we can better understand what goes on with a statically typed language.
In a dynamically typed language, you do an operation and basically hope the object is of the type where the operation makes some sense, otherwise you have to deal with the problem at runtime. Now, that may be a very good way to find out if your program works if you are sitting at a terminal debugging your code. There are nice quick response times, and if you do an operation that doesn't work, you find yourself in the debugger. That's fine. If you can find all the bugs, that's fine when it's just the programmer working�but for a lot of real programs, you can't find all the bugs that way. If bugs show up when no programmer is present, then you have a problem. I've done a lot of work with programs that should run in places like telephone switches. In such environments, it's very important that unexpected things don't happen. The same is true in most embedded systems. In these environments, there's nobody who can understand what to do if a bug sends them into a debugger.
With static typing, I find it easier to write the code. I find it easier to understand the code. I find it easier to understand other people's code, because the things they tried to say are expressed in something with a well-defined semantics in the language. For example, if I specify that my function takes an argument of type Temperature_reading, then a user does not have to look at my code to determine what kind of object I need; looking at the interface will do. I don't need to check whether the user gave me the wrong kind of object, because the compiler will reject any argument that is not a Temperature_reading. I can directly use my argument as a Temperature_reading without applying any kind of cast. I also find that developing those statically typed interfaces is a good exercise. It forces me to think about what is essential, rather than just letting anything remotely plausible through as arguments and return values, hoping that the caller and the callee will agree and that both will write the necessary runtime checks.
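As a sketch of such a statically checked interface: Temperature_reading is Stroustrup's example name, but its members and the function below are invented for illustration.

```cpp
// Temperature_reading is the type name from the discussion; the members
// (celsius, sensor_id) are hypothetical, made up for this sketch.
struct Temperature_reading {
    double celsius;
    int    sensor_id;
};

// The signature alone documents what the function needs. The compiler
// rejects any argument that is not a Temperature_reading, so no runtime
// type check or cast is needed inside the function body.
double to_fahrenheit(const Temperature_reading& r) {
    return r.celsius * 9.0 / 5.0 + 32.0;
}
```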
To quote Kristen Nygaard, programming is understanding. The meaning is: if you don't understand something, you can't code it, and you gain understanding by trying to code it. That's the foreword vignette in my third edition of The C++ Programming Language. That is pretty fundamental, and I think it's much easier to read a piece of code where you know you have a vector of integers rather than a pointer to an object. Sure, you can ask whether the object is a vector, and if so you can ask if it holds integers. Or perhaps it holds some integers, some strings, and some shapes. If you want such containers you can build them, but I think you should prefer homogeneous vectors that hold a specific type as opposed to a generic collection of generic objects. Why? It's really a variant of the argument for preferring statically checked interfaces. If I have a vector<Apple>, then I know that its elements are Apples. I don't have to cast an Object to an Apple to use it, and I don't have to fear that you have treated my vector as a vector<Fruit> and snuck a Pear into it, or treated it as a vector<Object> and stuck a HydraulicPumpInterface in there. I thought that was pretty well understood by now. Even Java and C# are about to provide generic mechanisms to support that.
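A minimal sketch of the homogeneous-container point: Apple, Pear, and Fruit are the names from the discussion, but their definitions here are invented.

```cpp
#include <string>
#include <vector>

// Illustrative types; the members are made up for this sketch.
struct Fruit { std::string name; };
struct Apple : Fruit { Apple() { name = "apple"; } };
struct Pear  : Fruit { Pear()  { name = "pear"; } };

// Elements of a vector<Apple> are statically known to be Apples,
// so no cast is needed to use one.
std::string first_name(const std::vector<Apple>& v) {
    return v.front().name;   // no cast: front() returns an Apple
}

// And the type system forbids sneaking a Pear in. This line would
// not compile if uncommented:
//   std::vector<Apple> apples; apples.push_back(Pear{});  // error
```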
On the other hand, you can't build a system that is completely statically typed, because you would have to deploy the whole system compiled as one unit that never changes. The benefits of more dynamic techniques like virtual functions are that you can connect to something you don't quite know enough about to do complete static type checking. Then, you can check what interfaces it has using whatever initial interfaces you know. You can ask an object a few questions and then start using it based on the answers. The question is along the lines of, "Are you something that obeys the Shape interface?" If you get yes, you start applying Shape operations to it. If you get no, you say, "Oops," and you deal with it. The C++ mechanism for that is dynamic_cast. This "questioning" using dynamic_cast contrasts with dynamically typed languages, where you tend to just start applying the operations. If it doesn't work, you say, "Oops." Often, that oops happens in the middle of a computation as opposed to the point when the object becomes known to you. It's harder to deal with a later oops.
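A sketch of this "questioning" style: Shape is the interface named in the discussion, while Entity, Circle, and draw() are invented for illustration.

```cpp
#include <string>

// What we initially know about the object: only that it is some Entity.
struct Entity { virtual ~Entity() = default; };
// The interface we want to question it about (Shape is from the text;
// draw() is a made-up operation).
struct Shape : Entity { virtual std::string draw() const = 0; };
struct Circle : Shape {
    std::string draw() const override { return "circle"; }
};

// Ask "are you a Shape?" at the point where the object becomes known
// to us, and handle the "no" right there, not in mid-computation.
std::string try_draw(Entity& e) {
    if (Shape* s = dynamic_cast<Shape*>(&e))  // the question
        return s->draw();                     // yes: apply Shape operations
    return "not a shape";                     // no: deal with the oops here
}
```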
Also, the benefits to the compiler in terms of optimization can be huge. The difference between a dynamically and a statically typed and resolved operation can easily be a factor of 50. When I talk about efficiency, I like to talk about factors, because that's where you can really see a difference.
Bill Venners: Factors?
Bjarne Stroustrup: When you get to percents, 10%, 50%, and such, you start arguing whether efficiency matters, whether next year's machine will be the right solution rather than optimization. But in terms of dynamic versus static, we're talking factors: times 3, times 5, times 10, times 50. I think a fair bit about real-time problems that have to be done on big computers, where a factor of 10 or even a factor of 2 is the difference between success and failure.
Bill Venners: You're not just talking about dynamic versus static method invocation. You're talking about optimization, right? The optimizer has more information and can do a better job.
Bjarne Stroustrup: Yes.
Bill Venners: How does that work? How does an optimizer use type information to do a better job of optimizing?
Bjarne Stroustrup: Let's take a very simple case. C++ has both statically and dynamically bound member functions. If you do a virtual function call, it's an indirect function call. If it's statically bound, it's a perfectly ordinary function call. An indirect function call is perhaps 25% more expensive these days. That's not such a big deal. But if it's a really small function that does something like a less-than operation on an integer, the relative cost of a function call is huge, because there's more code to be executed. You have to do the function preamble. You have to do the operation. You have to do the postamble, if there is such a thing. In the process of doing all that, you have to get more instructions loaded into the machine. You break the pipelines, especially if it's an indirect function call. So you get a factor of 10 to 30 for how to do a less-than. If such a difference occurs in a critical inner loop, it becomes significant. That was how the C++ sort beat the C sort: the C sort passed a function to be called indirectly, while the C++ version passed a function object with a statically bound inline function that degenerated into a less-than.
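The contrast Stroustrup describes can be sketched like this. std::qsort and std::sort are the real mechanisms being compared; the helper names are invented.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// C style: qsort invokes this comparison through a function pointer,
// one indirect call per comparison, which the compiler cannot inline.
int less_than(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);   // -1, 0, or 1
}

// C++ style: a function object with an inline call operator. std::sort
// is instantiated with this exact type, so the "call" can degenerate
// into a single compare instruction.
struct Less {
    bool operator()(int a, int b) const { return a < b; }
};

void sort_c(std::vector<int>& v) {
    std::qsort(v.data(), v.size(), sizeof(int), less_than);
}
void sort_cpp(std::vector<int>& v) {
    std::sort(v.begin(), v.end(), Less{});
}
```

Both produce the same ordering; the difference is that the C++ version gives the compiler the static type information needed to inline the comparison.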
Bill Venners: C++ culture is concerned with efficiency. Is there a lot of premature optimization going on? And how do we know the difference between early optimization that's premature versus early optimization that's prudent?
Bjarne Stroustrup: Some parts of the C++ community are concerned with efficiency. Some of them, I think, are concerned for good reasons, others just because they don't know any better. They have a fear of inefficiency that's not quite appropriate. But certainly there's an efficiency concern, and I think there are two ways of looking at it. The way I would look at efficiency is this: I would like to know that my abstractions could map in a reasonable way to the machine, and I would like to have abstractions that I can understand.
If I want to do linear algebra, I want a matrix class. If I want to do graphics, I want a graphics class. If I want to do string manipulation, I want a string class. The first thing I do is raise the level of abstraction to a suitable level. I'm using these fairly simple examples, because they're the most common and the easiest to talk about. The next thing I look out for is not to have an N² or N³ algorithm where I don't need it. I don't go to the web for information if I have the information locally. I don't go to the disk if I have a cached version in memory. I've seen people using modeling tools that ended up writing to the disk twice to write two fields into a record. Avoid such algorithms. I think this is prudent, up-front, design-level optimization, which is the kind of thing you should be concerned with.
Now, once you have a reasonably modeled world, with a reasonably high level of abstraction, you start optimizing, and that sort of late optimization is reasonable. What I don't like is when people, out of fear of high-level features and of abstraction, start using a very restricted subset of the language or avoid good libraries in favor of their own hand-crafted code. They deal with bytes where they could just as well deal with objects. They deal with arrays because they fear that a vector or a map class will be too expensive for them. Then they end up writing more code, code that can't be understood later. That's a problem, because in any big system you'll have to analyze it later and figure out where you got it wrong.
You also try to have higher abstractions so you can measure something concrete. If you use a map, you may find that it's too expensive. That's quite possible. If you have a map with a million elements, there's a good chance it could be slow. It's a red-black tree. In many cases, you can replace a map with a hash table if you need to optimize. If you only have 100 elements, it won't make any difference. But with a million elements, it can make a big difference.
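A sketch of the replacement Stroustrup suggests, using C++'s standard hash table, std::unordered_map. The alias and function names (Index, lookup) are invented; the point is that because the code says "associative lookup" rather than hand-rolled pointer chasing, the swap is a one-line change.

```cpp
#include <string>
#include <unordered_map>

// Was: using Index = std::map<std::string, int>;   (red-black tree, O(log n))
// Now: the standard library's hash table, O(1) average lookup. Only this
// one line changes; every use of Index below is unaffected.
using Index = std::unordered_map<std::string, int>;

// Illustrative lookup helper; returns -1 for a missing key.
int lookup(const Index& idx, const std::string& key) {
    auto it = idx.find(key);
    return it == idx.end() ? -1 : it->second;
}
```

With 100 elements the two containers are indistinguishable in practice; with a million, the measured difference is what justifies the swap.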
Now, if you've hacked at all at the lowest level, even once, you won't really know what you have. Maybe you knew your data structure was a map, but more likely it was an ad hoc map-like data structure. Once you realize that the ad hoc data structure didn't behave correctly, how do you know which one you can replace it with? You're working at such a low level that it's hard to get ideas. And then finally, if you've written an ad hoc data structure, you may have operations scattered all over your program. That's not uncommon with an ad hoc data structure. There's not a fixed set of operations you use to manipulate it; sometimes data is accessed directly from user code "for efficiency." In that case, your profiler won't show you where the bottleneck is, because you have scattered the code across the program. Conceptually the bottleneck belongs to something, but you didn't have the concept, or you didn't represent the concept directly. Your tools therefore cannot show you that this concept is what caused your problem. If something isn't in the code directly, no tool can tell you about that something by its proper name.
Come back Monday, February 23 for the next installment of this conversation with Bjarne Stroustrup.
Bill Venners is president of Artima Software, Inc. and editor-in-chief of Artima.com. He is author of the book Inside the Java Virtual Machine, a programmer-oriented survey of the Java platform's architecture and internals. His popular columns in JavaWorld magazine covered Java internals, object-oriented design, and Jini. Bill has been active in the Jini Community since its inception. He led the Jini Community's ServiceUI project that produced the ServiceUI API. The ServiceUI became the de facto standard way to associate user interfaces with Jini services, and was the first Jini community standard approved via the Jini Decision Process. Bill also serves as an elected member of the Jini Community's initial Technical Oversight Committee (TOC), and in this role helped to define the governance process for the community. He currently devotes most of his energy to building Artima.com into an ever more useful resource for developers.