Insights into the .NET Architecture

A Conversation with Eric Gunnerson

by Bill Venners with Bruce Eckel
February 9, 2004

Eric Gunnerson, the C# Compiler Program Manager at Microsoft, talks with Bruce Eckel and Bill Venners about several architectural design decisions in .NET, including multiple inheritance of interface, the emphasis on messaging over mobile code, internal access in assemblies, and the side-by-side execution answer to DLL Hell.

Eric Gunnerson, after previously working at a large Seattle aerospace company, a medium-sized PC database company, and a small VMS utility software company, joined Microsoft in the fall of 1994. After several years working on Microsoft's C++ compiler quality assurance team, including three years as the test lead on the Visual C++ compiler, Gunnerson took on a new assignment: testing the compiler of Microsoft's new language, C#. To more effectively perform his quality assurance role, Gunnerson joined the C# design team, where he spent several years working with Anders Hejlsberg, Peter Golde, Scott Wiltamuth, Peter Solich, and Todd Proebsting on the design of C#. In 2002, after the initial release of Visual Studio .NET, Gunnerson switched from quality assurance to program management. He is currently the C# Compiler Program Manager. He is the author of A Programmer's Introduction to C# (Apress, 2001), and writes a column for MSDN, Working with C#.

On July 30, 2003, Bruce Eckel, author of Thinking in C++ and Thinking in Java, and Bill Venners, editor-in-chief of Artima.com, met with Eric Gunnerson at Microsoft in Redmond, Washington. In this interview, Gunnerson discusses several architectural design decisions in .NET, including multiple inheritance of interface, the emphasis on messaging over mobile code, internal access in assemblies, and the side-by-side execution answer to DLL Hell. Comments are also contributed by Dan Fernandez, Microsoft's Product Manager for C#.

Multiple Inheritance of Interface Only

Bill Venners: The CLS, the Common Language Specification, has single inheritance of implementation, multiple inheritance of interface. Could you explain what the CLS is, and how the inheritance model was decided upon?

Eric Gunnerson: The CLS describes how languages should behave so they can play nicely together. For example, C# has unsigned types, but unsigned types are not in the CLS because languages like VB [Visual Basic] can't handle them. I think of the CLS as an interoperability spec. If you adhere to the CLS, then you can interoperate with the other languages.

Multiple inheritance has two or three big issues. First, trying to get people to agree what multiple inheritance actually means is difficult. If we try to do multiple inheritance in the CLS, we would probably choose something along the lines of multiple inheritance in C++, and that would put languages that have other kinds of multiple inheritance in an ugly state. People would say, "Microsoft supports multiple inheritance, but they support it wrong. I can't make my language work using what they have." People would make that argument.

A second issue for me is just the cognitive complexity involved—what it takes to understand what is really going on in your code when you start using multiple inheritance. I think multiple inheritance is kind of like advanced operator overloading in that respect. To really understand what's going on you have to think deeply. It may not hit you in a lot of cases, but there may be cases where something works strangely and you have to dig deeply to understand it. You should tend away from that kind of complexity when you are doing object design.

Now obviously one other question is, given that we're multi-language, what would you do with languages that don't support multiple inheritance? I don't see how we could ever put multiple inheritance in the CLS.

Bruce Eckel: What do you do with managed C++? Does it have multiple inheritance?

Eric Gunnerson: They don't allow multiple inheritance in managed C++ classes.

Bruce Eckel: I see, if I want a VB program to use this C++ class, I don't use multiple inheritance.

Eric Gunnerson: Another problem with supporting multiple inheritance is that if you were designing with multiple inheritance, you would do class libraries differently than you would with our current system. But then you can't do that, because some languages don't support multiple inheritance. So you might bifurcate the libraries—have one way for the multiple inheritance languages and one way for the other languages. But that would make the libraries very hard to understand.

Don't Burn Your Base Class

Bruce Eckel: Does the CLS allow multiple interface inheritance?

Eric Gunnerson: Yes.

Bruce Eckel: But that's a reasonable constraint.

Eric Gunnerson: It's a reasonable constraint if nobody causes you to do anything that makes you burn your base class.

Bill Venners: Burn my base class?

Eric Gunnerson: I'll give you the canonical example. When you want to make a component a remote object, you must derive its class from a base class called MarshalByRef. When MarshalByRef subclasses get marshaled, they get marshaled by reference. You get an object over here and a proxy over there, and the runtime handles all the mechanics. That works great, unless you actually want to use inheritance in your design. Because the designers of remoting decided to do this with the base class, they took the option of using inheritance in any other way away from you. They burned the base class.
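The canonical example Gunnerson describes can be sketched in C#. MarshalByRefObject is the actual base class .NET remoting requires; the Counter class and its members here are hypothetical:

```csharp
using System;

// To be remotable, Counter must spend its single base-class slot on
// MarshalByRefObject -- it can no longer inherit from anything else.
public class Counter : MarshalByRefObject
{
    private int count;
    public int Increment() { return ++count; }
}

// A class that already extends some other base class cannot be made
// remotable this way; its base class is already burned:
//
//   public class SpecialCounter : ExistingBase, MarshalByRefObject  // won't compile
```

When an instance of such a class crosses a remoting boundary, the caller gets a proxy, and calls on the proxy are forwarded back to the original object.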

Bill Venners: They burned the base class because if I want to remote something, I have to extend their class, not some other class I may want to extend.

Eric Gunnerson: Maybe you can get away with extending a class of yours that extends their class.

Bill Venners: But if it doesn't already extend their class and I can't change it, then I'm out of luck.

Eric Gunnerson: Yes, exactly. The constraint in supporting single inheritance of implementation, multiple inheritance of interface is that you have to pay a lot of attention to what you do. In fact, if you go talk to customers that use remoting, they say that exact same thing: why are you burning a base class here? You don't need to. You could use attributes or some other mechanism.

Bill Venners: So in design you should be careful not to burn your base class.

Eric Gunnerson: Yes. In design you have to be very careful.

Now, just because the CLS only directly supports multiple inheritance of interface doesn't mean that individual languages can't support more. Eiffel on .NET has multiple inheritance, for example. When compiled, it uses multiple interface inheritance and creates classes under the covers as necessary. So they do support full multiple inheritance, because Eiffel without multiple inheritance is way worse than C++ without multiple inheritance. A lot of common Eiffel idioms require multiple inheritance. It's just the way you write Eiffel code.
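In C# terms, the CLS model looks like this: at most one base class, any number of interfaces. A minimal sketch (the interface and class names are hypothetical):

```csharp
using System;

public interface IPrintable { void Print(); }
public interface IStorable { string Save(); }

// Single inheritance of implementation (object is the implicit base),
// multiple inheritance of interface (IPrintable and IStorable).
public class Document : IPrintable, IStorable
{
    public void Print() { Console.WriteLine("printing"); }
    public string Save() { return "<doc/>"; }
}
```

A compiler for a full multiple-inheritance language like Eiffel can map each parent class onto an interface of this kind and generate the shared implementation classes under the covers, as Gunnerson describes.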

Mobile Data and Mobile Code

Bill Venners: A year ago, Microsoft invited me to a .NET seminar for Java authors, and I was shown something called Windows Forms. For me, Windows Forms were somewhat reminiscent of Java applets in that they were user interface components you could download across the network and run in Internet Explorer. When Java first appeared, it generated excitement because of what applets represented: untrusted code flying across the network and running in a sandbox on the client. One of my goals in attending the .NET seminar was to try and ascertain whether the CLR could support that mobile-code model. And my conclusion was that it could, but nevertheless, all the buzz I hear about building distributed systems with .NET is centered on SOAP and the exchange of XML. In other words, all the talk is about mobile data, not mobile code. What is the place of mobile code in the CLR vision? Why does the CLR support mobile code if what you're planning to send across the network most of the time is XML?

Eric Gunnerson: I'll give you my opinion, which may not be the opinion of the CLR guys. First of all, the idea of exchanging code across the network scares the hell out of me from a security standpoint. One of our big focuses is trying to allow people to write stuff that is more secure than what they're writing now. But I guess my big question is, what is the architectural advantage of using mobile code? Maybe I haven't played around with it enough, but I just don't see a lot of need for doing that on the fly. If I want to do that in .NET I can. I can go across the network and fetch some code. I can load the code out of memory, I don't have to write it out to disk first. In other words, the package of a component is an assembly. I could read a byte stream and from that create and load an assembly. So I can do that sort of thing, but why would I want to architecturally?
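The in-memory loading Gunnerson mentions corresponds to the Assembly.Load(byte[]) overload in System.Reflection. A sketch, with the download URL assumed to point at a trusted assembly image:

```csharp
using System.Net;
using System.Reflection;

class MobileCodeSketch
{
    static Assembly LoadFromNetwork(string url)
    {
        // Fetch the raw assembly image over the network.
        using (WebClient client = new WebClient())
        {
            byte[] image = client.DownloadData(url);
            // Create and load an assembly straight from the byte
            // stream -- it is never written out to disk.
            return Assembly.Load(image);
        }
    }
}
```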

Dan Fernandez: You see a lot more people wanting to exchange information, rather than code, across the network. And because of firewalls, you often in practice just can't exchange code. We had a heck of a time with DCOM because of firewall requirements, and these were internal firewalls within an organization. It's very difficult to challenge the firewall policies. Security experts will open only port 80, and on port 80 they will allow only HTTP requests through. They won't allow any other protocol through.

Because SOAP uses the HTTP protocol, you can exchange and act on messages. I can call a web service from anywhere and get the message across, whether it is within an organization or across organizations. That's the grand vision. We have this great thing called the internet. Everybody's using it. It's very popular. We want to leverage that capability, that network, but that means we have to play with certain constraints. We have to use HTTP as the messaging protocol, over port 80. Basically, we must define a programming model within these confines.

Bruce Eckel: The way I see web services, it's almost like you're saying the machine is the object. I can call methods on the machine, which can be anything. And the machine can be implemented in anything.

Eric Gunnerson: Yeah, exactly. That's really the web services model.

And I think there's one other issue. It's going to sound strange when I say this, but I think we actually care more about not constraining whom you are talking to than Java does. When you send mobile code in Java, you're assuming the other side has a JVM that supports what you want to do. That brings up the same sort of issues you had with DCOM and CORBA. Anytime you're doing something that presumes what's on the other end...

Dan Fernandez: It is more tightly coupled.

Bruce Eckel: It's over-constrained.

Eric Gunnerson: It may be over-constrained. I think it really depends on the scenarios you are looking at. If you know what's on the other end, in Java you might pass classes across, in .NET you might use .NET remoting. You might pass assemblies across. I just think in a lot of cases you don't want to make that decision, because of the constraint it will put on you later.

Assemblies and Internal Access

Bill Venners: Could you give a bit of an overview of how assemblies work, in particular with respect to the internal access level? The first release of Java included several com.sun packages that you weren't supposed to use directly. Sun didn't promise that in future releases of Java, the com.sun packages would be backwards compatible, but you could use them at your own risk if you just called into them. Java has several access levels, but there's no notion of a package that's private, accessible only inside its own JAR file. Anybody can see any package. I believe a .NET assembly kind of corresponds to a Java JAR file, and a .NET namespace corresponds to a Java package. Does the internal scope mean only accessible inside the assembly? Can an internal access level be applied to an entire namespace?

Eric Gunnerson: Well, namespaces don't control availability. I would explain it this way: a namespace is a convenient way of giving a class a long name. So if I have a namespace Utility and I have in it my class Multiplier, it really means that the class is named Utility.Multiplier at the runtime level. So namespaces really aren't their own separate abstraction. Namespaces are ways of organizing the classes in a way that makes sense to programmers. Assemblies are related, but they're about organizing classes in a way that makes sense for deployment purposes. The organization you do for deployment is often analogous to what you do on the programming side, but often somewhat different.
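Gunnerson's Utility.Multiplier example in code, showing that a namespace is simply part of the class's full name at the runtime level:

```csharp
using System;

namespace Utility
{
    public class Multiplier
    {
        public int Multiply(int a, int b) { return a * b; }
    }
}

class Demo
{
    static void Main()
    {
        // The runtime sees one long name, not a separate abstraction.
        Console.WriteLine(typeof(Utility.Multiplier).FullName);
        // prints "Utility.Multiplier"
    }
}
```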

I'll give you an example. We have a namespace System.Text for classes that do text handling. Because the functionality in there is used all the time, System.Text lives in our main system assembly. But there's also System.Text.RegularExpressions, which has the regex engine. That lives in its own assembly. You don't want to require everyone to load the regex engine all the time, because a lot of programs don't use it. So that's the kind of organizational decision that you might make differently while programming versus at deployment time.

As far as accessibility goes, we have public, private, and protected, which do what you expect, and we have internal. From the C# compiler perspective, internal really means scoped to everything that compiles together. Everything that compiles together really means everything that's in an assembly, because you take a bunch of files and compile them to produce an assembly. With internal scope, I can have classes within the assembly that kind of cooperate with each other, but people from the outside can't get access at that level. We don't have friends the way C++ has friends, and there are actually times when having friends would be very nice.
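A sketch of internal scope as Gunnerson describes it (the class names here are hypothetical). Both classes compile together into the same assembly; only one is visible outside it:

```csharp
// Both classes are compiled together into one assembly.

// Visible to any assembly that references this one.
public class PublicApi
{
    public int DoWork() { return new Cooperator().Help(); }
}

// Visible only to code compiled into this same assembly --
// other assemblies cannot even name this class.
internal class Cooperator
{
    internal int Help() { return 42; }
}
```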

Side-By-Side Execution

Bill Venners: The .NET Framework supports side-by-side execution, in which multiple versions of the .NET runtime can be used by different applications running at the same time. Isn't there a trade-off in side-by-side execution? What if I have twenty applications running, all using a different version of the same conceptual DLL, and maybe half of them would work with the most recent version. I guess memory is cheap these days, but...

Eric Gunnerson: It's a tradeoff between robustness and disk and memory use. What side-by-side execution says is that it's more important they get the version that the developer knew about and tested against than trying to save memory space and disk space.

Bruce Eckel: Is there also a way to migrate upwards? Say I'm writing for version 1.0 of the .NET Framework. Is there a way for me to say I expect my app will also work with version 2.0 of the Framework?

Eric Gunnerson: What we expect people to do is this: Say I write something and release it on version 1.0 of the Framework. Later, the version 2.0 of the Framework comes along. I can take my application and test it against version 2.0. If it works fine, I can change the configuration information that goes out with my application. I don't have to change the executable.

Dan Fernandez: Conversely, if your application is written to a particular version, you can say that particular version is required. In a config file, you can say, "This app can't work with anything less than 1.1."
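A sketch of the application configuration file Fernandez describes. The supportedRuntime and bindingRedirect elements are part of the actual .NET Framework config schema; the assembly name, public key token, and version numbers here are illustrative:

```xml
<configuration>
  <startup>
    <!-- Declare which runtime version this application supports. -->
    <supportedRuntime version="v1.1.4322" />
  </startup>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SomeLibrary"
                          publicKeyToken="0123456789abcdef" />
        <!-- Retarget the 1.0-era reference to the tested 2.0 DLL
             without changing the executable itself. -->
        <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```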

Bill Venners: In theory, I like the idea of contracts. If new versions of a DLL adhere to the contract of the old version, my application should in theory still work with the new version. In practice, however, if I'm writing and delivering an application, I expect I'm just going to allow versions I've tested against. Why bother taking the risk that my application will break with new versions of the DLL?

Eric Gunnerson: The whole point here is to get away from having DLL Hell. DLL Hell was exactly that case where things worked a lot of times, but when they didn't work, you were just screwed. You just could not get out of it.

Bill Venners: So is this DLL Bliss now? What would you call it? DLL Heaven?

Eric Gunnerson: No, I wouldn't say it even comes close to that. One of the problems you have now is exactly what Bruce alluded to, how do you give people reasonable migration paths, so they can actually get new functionality without having to go back and recompile everything and change their config files? We have had a lot of discussions about that.

Bill Venners: Let's say I'm using your DLL. You come out with version 2.0. It has twice as much functionality. I wouldn't be calling that new functionality anyway, because it didn't exist when I wrote my application. So I have to write version 2.0 of my application, at which point I could compile and deploy against your 2.0 DLL. Why would I want to migrate my 1.0 application to your 2.0 DLL?

Eric Gunnerson: If you've written to DLL version 1.0, and in DLL version 2.0 the component you're using was actually no different except for bug fixes, wouldn't you like to have those?

Bruce Eckel: Yes I would, unless I had adapted my code to the buggy previous version.

Next Week

Come back Monday, February 16 for part III of a conversation with C++ creator Bjarne Stroustrup. If you'd like to receive a brief weekly email announcing new articles at Artima.com, please subscribe to the Artima Newsletter.




About the authors

Bill Venners is president of Artima Software, Inc. and editor-in-chief of Artima.com. He is author of the book Inside the Java Virtual Machine, a programmer-oriented survey of the Java platform's architecture and internals. His popular columns in JavaWorld magazine covered Java internals, object-oriented design, and Jini. Bill has been active in the Jini Community since its inception. He led the Jini Community's ServiceUI project that produced the ServiceUI API. The ServiceUI became the de facto standard way to associate user interfaces to Jini services, and was the first Jini community standard approved via the Jini Decision Process. Bill also serves as an elected member of the Jini Community's initial Technical Oversight Committee (TOC), and in this role helped to define the governance process for the community. He currently devotes most of his energy to building Artima.com into an ever more useful resource for developers.

Bruce Eckel provides development assistance in Python with user interfaces in Flex. He is the author of Thinking in Java (Prentice-Hall, 1998, 2nd Edition, 2000, 3rd Edition, 2003, 4th Edition, 2005), the Hands-On Java Seminar CD ROM (available on the Web site), Thinking in C++ (PH 1995; 2nd edition 2000, Volume 2 with Chuck Allison, 2003), and C++ Inside & Out (Osborne/McGraw-Hill 1993), among others. He's given hundreds of presentations throughout the world, published over 150 articles in numerous magazines, was a founding member of the ANSI/ISO C++ committee, and speaks regularly at conferences.