On Monday, December 10, 2001 Bill Venners visited the Sun Microsystems campus in Santa Clara, California to interview Josh Bloch, an architect in the Core Java Platform Group. In this interview, Josh discusses many aspects of design.
Over the past several years, Sun Microsystems architect Joshua Bloch has designed major enhancements to the Java APIs and language, including the Java Collections API and the java.math package. Most recently, he led the expert groups that defined Java's assert and preferences facilities. In his 2001 book, Effective Java Programming Language Guide, Josh distilled his wisdom into 57 concrete guidelines for designing and implementing Java programs. In this interview, Josh discusses API design, extreme programming, code quality and reuse, refactoring, defensive copies, trust, forgiveness, immutables, documenting for inheritance or disallowing it, copy constructors, factory methods, the equals method, and more.
Bill Venners: In the preface of your fine book, Effective Java Programming Language Guide, you write that you tend to think in terms of API design. I do too. If I were managing a large software project, I would want to decompose it into subsystems and have people design interfaces to those subsystems -- interfaces that would be APIs.
Considering that, how does an API-design approach contrast with the popular extreme programming approach, and to what extent should API design be the norm in software projects?
Joshua Bloch: In my experience, there is too much monolithic software construction. Someone says that he wants to design a record-oriented file system, and does it. He starts designing the record-oriented file system and sees where it leads him, rather than following the decomposition you speak of. Decomposition into subsystems is important, but just as important is to have each subsystem be a well-designed, freestanding abstraction. That's where I feel like a preacher on a soapbox.
It's all too easy to fail to turn the subsystem into a reasonable component. In particular, it's easy to let reverse dependencies creep in, where you write the low-level subsystem for the use of its initial higher-level client, and you let assumptions about that client creep downwards. In the case of less experienced programmers, it's more than assumptions. You let variable names creep downwards, and you let specific artifacts of that initial client creep into the allegedly lower-level reusable component. When you finish, you don't have a reusable component, you just have a piece of a monolithic system.
You want to weaken that coupling such that the subsystem can then be reused outside of its original context. And there are all sorts of reasons for doing that, which I go over in my book. You write something for one use, and it subsequently finds its major use elsewhere. But that only works if a subsystem is a well-designed, freestanding abstraction.
Bill Venners: One of the early slogans about the object-oriented approach stated that it promoted reuse. But I think people found in practice that they didn't reuse much. Everybody needed something slightly different from what already existed, so they wrote that new thing from scratch. Perhaps things weren't designed for reuse, but, nevertheless, people still managed to build software. To what extent do you think reuse is important?
Josh Bloch: Reuse is extremely important but difficult to achieve. You don't get it for free, but it is achievable. The projects I do here at Sun -- the Java Collections, java.math, and so on -- are reusable components. I think they have been quite successful at being used by a number of vastly different clients.
In my last job, where I built systems, I found that 75 percent of the code I wrote for any given system was reusable in other systems. To achieve that reuse level, I had to consciously design for it. I had to spend a large fraction of my time decomposing things into clean, freestanding abstractions, debugging them independently, writing unit tests, that sort of thing.
Many developers don't do those steps. You touched on it when you asked how API design contrasts with extreme programming, and what you do if you are a manager building something. One extreme programming tenet says that you should write the simplest thing that can solve your problem. That's a fine tenet, but it's easy to misconstrue.
The extreme programming proponents don't advocate writing something that will barely work as fast as you can. They don't advise you to forgo any design. They do advocate leaving out the bells, whistles, and features you don't need, adding them later if a real need is demonstrated. And that's incredibly important, because you can always add a feature, but you can never take it out. Once a feature is there, you can't say, sorry, we screwed up, we want to take it out because other code now depends on it. People will scream. So, when in doubt, leave it out.
Extreme programming also stresses refactoring. During the refactoring process, you spend much of your time cleaning up the components and APIs, ripping things into better modules. It is critical to do this, and to stay light on your feet -- don't freeze the APIs too early. But you'll have less work to do if you design the intermodular boundaries carefully to begin with.
Bill Venners: Why?
Josh Bloch: Because massive refactorings prove difficult. If you built something as a monolithic system and then find you had repeated code all over the place, and you want to refactor it properly, you'll have a massive job. In contrast, if you wrote it as components, but you got some of the component boundaries a little wrong, you can tweak them easily.
I think the disconnect between extreme programming and the API-based design process I espouse is not as great as it appears. When you talk to someone like Kent Beck [author of Extreme Programming Explained], I think you'll find that he does much of the same stuff I do.
To get back to your first question, if you are a manager, you should certainly give your team the latitude to create a good design before they jump in and start coding. At the same time, you shouldn't let them design every bell and whistle in the world; you should ensure they design the minimal system that will do the job.
Bill Venners: So it sounds like the extreme programming folks are trying to say, "Do the simplest feature set that could possibly work." But not, "Do the quickest slop you can throw down that could possibly work."
Josh Bloch: Precisely. In fact, people who try to "do the quickest slop you can throw down" often take longer to produce a working system than people who carefully design the components. But certainly, API design helps if you consider cost over time. If you throw down some slop, and, God forbid, the slop becomes immortalized as a public API that must be lived with for years, you really are toast. Such APIs become a tremendous support burden over time, and lead to great customer dissatisfaction.
Bill Venners: You also claim in your book that thinking in terms of APIs tends to improve code quality. Could you clarify why you think that?
Josh Bloch: I'm talking about programming in the large here. It's relatively easy to write high-quality code if you are tackling a reasonably sized problem. If you do a good decomposition into components, you'll be able to concentrate on one thing at a time, and you'll do a better job. So doing good programming in the large leads to good programming in the small.
Moreover, modular decomposition represents a key component of software quality. If you have a tightly coupled system, when you tweak one component, the whole system breaks. If you thought in terms of APIs, the intermodular boundaries are clear, so you can maintain and improve one module without affecting the others.
Bill Venners: Can you clarify what you mean by "programming in the large" and "programming in the small?"
Josh Bloch: By "programming in the large," I mean tackling a big problem -- a problem too big to sit down and solve with one little freestanding program; a problem big enough that it must be broken down into subproblems. "Programming in the large" involves the complexity issues inherent in a large problem. In contrast, "programming in the small" asks: How can I best sort this array of floats?
Bill Venners: How does focusing on API design serve you well in the refactoring process? I believe I heard you say that developers won't have to do as much refactoring.
Josh Bloch: That's part of it. In addition, refactoring is often ex post facto API design. You look at the program and say: I have almost the same code here, here, and here. I can break this out into a module. Then you carefully design that module's API. So, whether you do it at refactoring time or up front, it's the same process.
In truth, you always do a bit of both. Programming is an iterative process. You try to do the best you can up front, but you don't really know whether you have the API right until you use it. Nobody gets it right the first time, even if they have years of experience.
Doug Lea [author of Concurrent Programming in Java] and I chat about this issue from time to time. We write stuff together, and, when we try to use it, things don't always work. In retrospect, we make obvious API design mistakes. Does this mean Doug and I are dumb? Not really. It's just impossible to predict exactly what the demand will be on an API until you have tried it. That's why whenever you write an interface or an abstract class, it's critical to do as many concrete implementations as possible before committing to the API. It's difficult or impossible to change it after the fact, so you better make sure it's good beforehand.
Bill Venners: Now I'd like to talk about trust. To what extent should I trust client programmers to do the right thing? You write in your book about making defensive copies of objects passed to and from methods. Defensive copying is an example of not trusting clients. Is there not a robustness versus performance tradeoff to defensive copying? Indeed, if you have arbitrarily large objects, could it be expensive to defensively copy every time?
Josh Bloch: Clearly there is a tradeoff. On the other hand, I don't believe in attacking performance problems too early. If it's not a problem, why bother attacking it? And there are other ways around the problem. One, of course, is immutability. If something can't be modified, then you don't have to copy it.
It is true you might opt in favor of performance rather than robustness and not copy something if you believe you are operating in a safe environment -- an environment where you know and trust your clients won't do the wrong thing. Certainly you'll find most C-based programs filled with comments saying: It is imperative that this method's client not modify the object after it is called, blah, blah, blah. Yes, you can successfully write code that way. Anyone who has programmed in C or C++ has done so. On the other hand, it's more difficult because you forget you cannot modify it, or you have aliasing. You don't modify it, but, oops, you've passed a reference to the same object to someone else who doesn't realize he shouldn't modify it.
All things being equal, it is easier to write correct code if you actually do the defensive copying or use immutability than if you depend on the programmer to do the right thing. So, unless you know that you have a performance need to allow this sort of error to happen, I think it's best to simply disallow it. Write the program, then see if it runs fast enough. If it doesn't, then decide whether you want to carefully relax those restrictions.
Generally speaking, you should not allow an ill-behaved client to ruin a server. You want to isolate failures from one module to the next, so that a failure in one module can't break a second module. It's a defense against intentional failures, as in hacking. And more commonly, it's a defense against sloppy programming or against bad documentation, where a user of some module doesn't understand his responsibilities in terms of modifying or not modifying some data object.
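The defensive-copying advice above can be sketched in a few lines. The Period class below is a hypothetical example, not taken from any particular library: it copies its mutable Date parameters on the way in, and returns copies on the way out, so no client alias can corrupt its state.

```java
import java.util.Date;

// Hypothetical example: an immutable time period built from mutable Dates.
// Defensive copies isolate the object from misbehaving or sloppy clients.
final class Period {
    private final Date start;
    private final Date end;

    Period(Date start, Date end) {
        // Copy first, then validate the copies, so a client can't mutate
        // the arguments between the check and the copy.
        this.start = new Date(start.getTime());
        this.end = new Date(end.getTime());
        if (this.start.after(this.end)) {
            throw new IllegalArgumentException("start after end");
        }
    }

    // Return copies too, so callers can't reach into our internals.
    Date start() { return new Date(start.getTime()); }
    Date end()   { return new Date(end.getTime()); }
}
```

Even if the caller later mutates the Date it passed in, the Period's notion of its start and end is unaffected.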
Bill Venners: If I defensively copy an object passed into, say, a constructor, should I document that defensive copying as part of the class's contract? If I don't document it, I may have the flexibility later to remove the defensive copy for a performance tweak. But if I don't document it, client programmers can't be sure the constructor will do a defensive copy. So they may do a defensive copy themselves before they pass the object to my constructor. Then we'll have two defensive copies.
Josh Bloch: If you haven't documented it, is a client permitted to modify the parameter or isn't he? Obviously, if you have a paranoid client, he won't modify the parameter because it might hurt your module. In practice, programmers aren't that paranoid -- they do modify. All things being equal, if the documentation doesn't say you must not do this, the programmer will do it. So, you are signing on for that defensive copy whether or not you document it. And you might as well document it because then even the paranoid client will know that, yes, he has the right to do anything he wants with the input parameter.
Ideally, you should document that you are doing a defensive copy. However, if you look back at my code, you'll find that I haven't. I defend against sloppy clients, but I haven't always made this explicit in the documentation.
One problem with writing widely distributed source code is that people can go back and look at your code and say: Ah, but you didn't do it here. Generally speaking, I respond: Well, I learned a few things since I wrote that; now I know I have to do it. In this case, it's a question of how careful is your documentation? I'm probably not careful enough, and, yes, I think it's worth being careful.
Bill Venners: Should I trust that passed objects correctly implement their contract? Recently, I had to write a Set implementation that had a consistent serialized form between different virtual machines. I was writing a class I called ConsistentSet as a subclass of your AbstractSet, and I had a constructor that took a Set. In this constructor I simply wanted to copy the elements out of the passed Set into an array that formed the state of the new ConsistentSet. I put the elements in an array so that whenever the ConsistentSet was serialized in any virtual machine, the elements would get serialized in the same order.
As I wrote the code that pulled the elements out of the Set and copied them into the array, I wondered whether I should trust that the passed Set contains no duplicates. Because if I do, I'm relying on someone else to implement his contract correctly. And if the passed Set violates its contract by containing duplicate elements, it breaks my class. I'm also a Set, which is also supposed to contain no duplicates, so perhaps I should program defensively and check for duplicates in the passed Set as I copy its elements to my array. On the other hand, isn't a basic idea of object-oriented programming that you divide up responsibilities among different objects, and let each object perform its role as promised in its contract?
Josh Bloch: I think you have no choice but to trust that objects implement their contracts. Once people start violating their contracts, the whole world falls apart. A simple example: Equal objects must have equal hash codes. If you created types for which this isn't true, hash tables and hash sets wouldn't work.
More generally, when objects stop obeying their contracts, objects around them start to break -- that's just the way it is. I can understand why it makes you nervous, but, yes, you do have to trust objects to implement their contracts. If you feel nervous, you can take a hint from the intelligence community and "trust but verify." The best way to do this is with assertions, because you don't pay for them when you turn them off. Use assertions to test that other objects are obeying their contracts, and, if your program starts acting strangely, enable assertions and you may well find out who is to blame.
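The "trust but verify" idea maps directly onto Java's assert facility. The sketch below revisits the hypothetical ConsistentSet from the question: it trusts the passed Set, but when assertions are enabled it verifies the no-duplicates contract; when they are disabled (the default), the check costs nothing.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical class from the question: stores a Set's elements in a
// fixed order, trusting -- but optionally verifying -- the Set contract.
final class ConsistentSet<E> {
    private final List<E> elements;

    ConsistentSet(Set<E> source) {
        elements = new ArrayList<>(source);
        // Trust but verify: a proper Set never contains duplicates.
        // Enable with: java -ea ...
        assert new HashSet<>(elements).size() == elements.size()
                : "source Set violated its no-duplicates contract";
    }

    List<E> elementsInOrder() {
        return Collections.unmodifiableList(elements);
    }
}
```

If a program using this class starts acting strangely, running it with -ea will point the finger at the misbehaving Set.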
Bill Venners: Should I trust subclasses more intimately than non-subclasses? For example, do I make it easier for a subclass implementation to break me than I would for a non-subclass? In particular, how do you feel about protected data?
Josh Bloch: To write something that is both
subclassable and robust against a malicious subclass is actually
a pretty tough thing to do, assuming you give the subclass access
to your internal data structures. If the subclass does not have
access to anything that an ordinary user doesn't, then it's
harder for the subclass to do damage. But unless you make all
your methods final, the subclass can still break your contracts
by just doing the wrong things in response to method invocation.
That's precisely why security-critical classes like String are final. Otherwise someone could write a subclass that makes Strings appear mutable, which would be sufficient to break security. So you must trust your subclasses. If you don't trust them, then you can't allow them, because subclasses can so easily cause a class to violate its contract.
As for protected data in general, it's a necessary evil. It should be kept to a minimum. Most protected data and protected methods amount to committing to an implementation detail. A protected field is an implementation detail that you are making visible to subclasses. Even a protected method is a piece of internal structure that you are making visible to subclasses.
The reason you make it visible is that it's often necessary in order to allow subclasses to do their job, or to do it efficiently. But once you've done it, you're committed to it. It is now something that you are not allowed to change, even if you later find a more efficient implementation that no longer involves the use of a particular field or method.
So all other things being equal, you shouldn't have any protected members at all. But that said, if you have too few, then your class may not be usable as a super class, or at least not as an efficient super class. Often you find out after the fact. My philosophy is to have as few protected members as possible when you first write the class. Then try to subclass it. You may find out that without a particular protected method, all subclasses will have to do some bad thing.
As an example, if you look at AbstractList, you'll find that there is a protected method to delete a range of the list in one shot (removeRange). Why is that in there? Because the normal idiom to remove a range, based on the public API, is to call subList to get a List, and then call clear on that List. Without this particular protected method, however, the only thing that clear could do is repeatedly remove individual elements.
Think about it. If you have an array representation, what will
it do? It will repeatedly collapse the array, doing order N work
N times. So it will take a quadratic amount of work, instead of
the linear amount of work that it should. By providing this
protected method, we allow any implementation that can
efficiently delete an entire range to do so. And any reasonable
List implementation can delete a range more
efficiently all at once.
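Here is a minimal sketch of that idiom: a hypothetical array-backed subclass of AbstractList that overrides the protected removeRange hook, so that the subList(...).clear() idiom collapses the array once (linear work) instead of removing elements one at a time (quadratic work).

```java
import java.util.AbstractList;

// Hypothetical array-backed list. Overriding the protected removeRange
// lets subList(from, to).clear() delete the whole range in one shot.
final class ArrayBackedList extends AbstractList<Integer> {
    private final Integer[] data;
    private int size;

    ArrayBackedList(Integer... initial) {
        data = initial.clone();
        size = initial.length;
    }

    @Override public Integer get(int i) {
        if (i < 0 || i >= size) throw new IndexOutOfBoundsException();
        return data[i];
    }

    @Override public int size() { return size; }

    @Override protected void removeRange(int from, int to) {
        // One arraycopy collapses the gap: linear, not quadratic.
        System.arraycopy(data, to, data, from, size - to);
        size -= (to - from);
        modCount++;  // keep AbstractList's fail-fast iterators honest
    }
}
```

Calling list.subList(1, 4).clear() on this class routes through removeRange, so the range vanishes with a single arraycopy.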
That we would need this protected method is something you would have to be way smarter than me to know up front. Basically, I implemented the thing. Then, as we started to subclass it, we realized that range delete was quadratic. We couldn't afford that, so I put in the protected method. I think that's the best approach with protected methods. Put in as few as possible, and then add more as needed. Protected methods represent commitments to designs that you may want to change. You can always add protected methods, but you can't take them out.
Bill Venners: And protected data?
Josh Bloch: The same thing, but even more. Protected data is even more dangerous in terms of messing up your data invariants. If you give someone else access to some internal data, they have free rein over it.
Bill Venners: OK, enough of trust. How about forgiveness? To what extent should I be forgiving of clients? For example, imagine a constructor that takes an array of Config objects. Should I allow nulls in that array? I could document clearly that null elements are allowed and simply ignored. Maybe allowing nulls makes the class easier to use, but on the other hand, it may also be missing an opportunity to catch null reference bugs earlier.
Josh Bloch: I think you pretty much answered your own question. I agree with the latter observation. Basically, I think this is one of these specious things. Some people claim they want this freedom, but in practice, once they have it, all it does is mask bugs. Also, there are general conventions for these things. In Java, the convention is not that null means a zero-length array; you pass a real zero-length array instead of null. If you pass a null to something, it often invokes a method on it and throws a NullPointerException right away. If an API allows nulls to exist longer, it isn't doing you any favor. It's just pushing the exception off to the next API that you pass the thing to. Often, it's better to just enforce the rules uniformly. Some people will complain, especially because the convention isn't completely universal. There are APIs that do let you pass around null as an abbreviation for zero-length arrays or for an empty string, etc. And those APIs are in a sense bad citizens, because once you mix them with APIs that don't, you're in trouble.
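The convention described above -- a real zero-length array rather than null -- looks like this in practice. CheeseShop is a made-up illustration, not an API from any library:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example: an accessor that returns a zero-length array
// instead of null when there is nothing to report, so callers can loop
// over the result without a null check.
final class CheeseShop {
    private final List<String> inStock = new ArrayList<>();

    void add(String cheese) { inStock.add(cheese); }

    String[] cheeses() {
        // Never null: an empty shop yields an empty array.
        return inStock.toArray(new String[0]);
    }
}
```

A caller can write `for (String c : shop.cheeses())` unconditionally; an empty shop simply produces zero iterations.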
This is one of these few places where I feel like some sort of puritan. But I have found that it's easier to write robust correct systems if you are maybe a little less forgiving on input. On the other hand, this is a controversial issue and the greater the variety of clients you have to operate with, the more forgiving you should be. For example, if a browser threw up its hands every time it hit bad HTML, that would be a disaster. There are millions of people writing HTML, and many of them have no clue how to write syntactically perfect HTML.
Bill Venners: That's true, but on the other hand most people probably look at their web pages in a browser before publishing them. If the browser didn't work until the HTML was correct, then you'd probably have much less bad HTML on the web. And this issue is not just about nulls. It's a general philosophy issue in design.
I've met people who say you should be forgiving. If somebody
passes something that's a little weird, you make assumptions
about their intent and go on, instead of throwing an exception
and killing everything. They think it is better to try and keep
going than to bring everything to a screeching halt.
Josh Bloch: One of the biggest problems with forgiveness is you start to lose precision. Often the specs on what can come in are designed to let you precisely state your intentions. If you get something that isn't syntactically valid and you try to intuit what the programmer's intentions were, you may come up with something that does not match their intentions. That's why we have formal languages. There are places where we are utterly unforgiving.
For instance, if you look at Integer.parseInt, it does not tolerate leading or trailing white space. Occasionally, people complain about this, but I think it can be justified on these grounds: there's a precise definition for what constitutes a legitimate string representation of an integer, and that's what you have to provide. Generally speaking, I find the things that are looser are the ones where we get into trouble.
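Integer.parseInt is a concrete case of this strictness: anything that is not exactly a legitimate string representation of an integer, including input with leading or trailing white space, is rejected with a NumberFormatException.

```java
// Demonstrates how strictly Integer.parseInt interprets its input.
final class ParseIntDemo {
    // Returns true only if Integer.parseInt accepts the string as-is.
    static boolean parses(String s) {
        try {
            Integer.parseInt(s);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```

A caller who wants to be forgiving about surrounding white space must say so explicitly, for example by calling trim() first; the parser itself never guesses.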
An extreme example of this is the persistent form of java.util.Properties. The store and load methods of that class emit properties to disk and pull them in. The on-disk format tries to be forgiving, but it doesn't do a terribly good job of it. If you look at the bug database, you'll probably see tens if not hundreds of bugs having to do with the inability to get something reasonable on input when you output some properties and edit them, or take them from one locale to another, or one system to another.
The reason for all these bug reports is that we didn't write a BNF for the property file format. If we had just said, "Look, this is the syntax of a legitimate on-disk properties file. Obey it and you'll have no problems," that would have been great. But we didn't do that. Instead, we said, "This is roughly speaking what a legitimate on-disk properties file is, and we're a little flexible in what we accept." In practice, there is no clear demarcation of what constitutes legitimate and what doesn't. And there is no clear mapping from what's on disk to the actual logical object.
If you look at Preferences, which is in some ways a successor to Properties, you'll see that it uses XML with a DTD as its on-disk representation. And there's no doubt as to what constitutes a legitimate Preferences document. Either it is or it isn't. Either it obeys the DTD or it doesn't.
Bill Venners: Should I document in the contract of a class that it is immutable? If I don't, I may have the option of making the class mutable in a future version. But if I don't mention immutability in the contract of the class, clients may feel the need to clone instances of it before passing them to or from methods. If I do document immutability and the class is subclassable, then I'm basically relying on the kindness of the person who does the subclass to make sure the subclass is also immutable.
Josh Bloch: If it's not subclassable, then you are documenting it, assuming you are documenting your class properly, whether you use the words "represents an immutable complex number" or not. If the set of operations that you have does not permit mutation, then you have documented that it's immutable.
I think you should document as clearly as possible, though, so I think using the word immutable is not a bad idea at all. If it's subclassable, then it's actually part of the contract that the subclasser must maintain, so I think you should get right out and say, "These are immutables," so anyone making subclasses knows they should maintain that immutability.
But as you've alluded to, subclassability and immutability are somewhat at odds with one another. It is possible to do something that's both immutable and extendable but it demands great care. The most you can do, if you assume an unfriendly subclasser, is to make all the methods final and say, "You are permitted to add methods to this." Because the moment you start letting someone override the methods, they can do things that imply mutability.
For example, let's say you're subclassing String (in some alternate universe where it isn't final). Any of the operations that actually do something to the String -- the ones that search for substrings, etc. -- can return different results depending on when they are called. They can return random results, if you really have a sense of humor. And in doing so, you would have broken the immutability of String.

So if you want something both subclassable and immutable, you better make all the methods final. This raises the question: why bother making it subclassable? Why not just make it final, or give it no accessible constructors? Let someone who wants to add methods write a "utility class" consisting of static methods that take one of the things as input.
Generally speaking, my view is that classes that are immutable should not be subclassable, and I've goofed in this area. The first big thing that I did when I arrived here in 1996 was java.math. If you look at BigInteger and BigDecimal, you'll see that they are both immutable and subclassable. I know in my heart of hearts that this is wrong. In fact, the methods aren't final, so you can make BigInteger appear mutable by subclassing it and then just lying in response to, say, divide operations. Whereas you can't do this with String because it's genuinely immutable: String is final, so it does what it says it does.
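The hazard is easy to demonstrate. Because BigInteger isn't final and its methods aren't final, a subclass can "lie". The hypothetical class below answers differently each time it is asked, making an apparently immutable value observably change:

```java
import java.math.BigInteger;

// Hypothetical malicious subclass: BigInteger is supposed to be
// immutable, but nothing stops a subclass from giving inconsistent
// answers, breaking any code that relies on value stability.
final class LyingBigInteger extends BigInteger {
    private int calls = 0;

    LyingBigInteger(String val) { super(val); }

    @Override public String toString() {
        // Different result on every call: observable "mutation".
        return super.toString() + "." + (calls++);
    }
}
```

Any code that accepts a BigInteger from an untrusted source and assumes its value can never change is vulnerable to exactly this kind of subclass.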
Bill Venners: That makes sense, but I think it also conflicts somewhat with what you said earlier (in Part I of this interview) about trusting that classes implement their contract. A subclass is a class. If its contract says that it must maintain immutability, then should we trust it to do that? It's a hard line to draw. When do I trust and when do I stop trusting?
Josh Bloch: You need to trust, but on the other hand, you can also exercise reasonable caution. It is true that when you allow subclassing, you are opening the door for more difficulties, so you should only do this if you have a good reason to do it. This is something that I arrived at over time. Subclassing is great for certain things. In particular, I think within a library -- within a trust boundary -- it's just great. If you are implementing a whole bunch of different collections classes, it's really nice if you can share a mechanism by subclassing. For example, LinkedHashMap is a subclass of HashMap: all the hashing stuff is inside HashMap, and it works nicely. But on the other hand, once you do this across libraries you are trusting people who maybe you shouldn't trust, because it's the world at large that you are trusting. And you have to ask yourself, "What am I getting in exchange for this?"
When it comes down to these immutable value types, what you're getting in exchange for it is you are letting people add methods to your immutable value type. It's not clear that this is important. When I am using somebody else's class, I'm just as happy to write a static utility class to "add" methods. And I would hope that other people are happy to do the same thing to my classes.
So, basically, I think you shouldn't make something extendable unless you have a good reason to do it.
Bill Venners: You just said "value types." The last time I talked to James Gosling, I asked him when would he make a class immutable versus mutable. He said he'd make classes immutable whenever he could, because you don't have to worry about cloning it before you pass it and so on. In your book you seem to reiterate that philosophy. The way I have always thought about immutables, though, was primarily as value classes. The kind of classes that felt like immutables to me were things like a complex number, a big integer, a bit set, or maybe a matrix. But if I have an abstraction that models, say, a vending machine, where I have money going in and candy bars coming out, that looks like a state machine and feels to me like a mutable object. When would you use immutables in designs?
Josh Bloch: There are all sorts of ways you could design a vending machine. For starters, you'll probably have a type to represent the state. By all means, that should be immutable. The type to represent the state could be either a type-safe enum or a primitive int; you should probably use a type-safe enum. If I have a variable deep in the bowels of my machine that represents the current state, the last thing I want is somebody else from the outside being able to change the state of my machine by modifying this object that's doing nothing but holding the state.
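In the Java of this interview, which predates the enum keyword, "type-safe enum" meant the hand-rolled pattern from Bloch's book: a final class with a private constructor and a fixed set of immutable instances. A minimal sketch for the hypothetical vending machine:

```java
// Typesafe enum pattern (pre-Java 5): the only VendingState instances
// that can ever exist are the ones declared here, and each is immutable,
// so outside code can neither forge nor mutate a state.
final class VendingState {
    private final String name;

    private VendingState(String name) { this.name = name; }

    static final VendingState IDLE           = new VendingState("IDLE");
    static final VendingState COINS_INSERTED = new VendingState("COINS_INSERTED");
    static final VendingState DISPENSING     = new VendingState("DISPENSING");

    @Override public String toString() { return name; }
}
```

Because the instances are canonical, states can be compared with ==, and a field of type VendingState can only ever hold one of the three declared values (or null).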
So, I think there is a place for immutability, even if you decide to go with the mutable design, which I think is perfectly reasonable. There is a way to do a state machine immutably, which is basically to duplicate the entire machine each time you do a state transition. But I think that is forced, and I don't do that sort of design generally.
You can look at things that I've done where there is a state transition diagram, like the Timer class in java.util. A Timer object does go through state transitions. But that said, I think that when you are designing a state transition system, you should make it as simple as possible -- in a sense, as immutable as possible.
So if something is naturally modeled as a state machine, then clearly, there would be a mutable object that is the state machine. But on the other hand, you should carefully analyze the state transition diagram, make it as simple as possible -- give it as few states as possible and as few arcs as possible. And then you should document the thing to a T. Most of the really broken classes that I have dealt with are the ones that have complicated state transition diagrams that were not well documented. If a state transition diagram isn't well documented, people just can't use the class. Client code that works will break in the next release because the implementation will be changed in a way that subtly changes the state transition diagram. Because it was never documented in the first place, people depend on implementation details that are no longer accurate. So whenever you have mutable classes, you should be very conscious of the fact that there is a state transition diagram, and you should document it to your user.
This is one of the big differences between APIs designed by people with a lot of experience and APIs designed by novices. Experienced programmers really do try to minimize the state space, and people with less experience don't realize that this is an important aspect of any API.
Bill Venners: Sounds like the guideline is "minimize mutability."
Josh Bloch: Absolutely. It's funny you should phrase it that way, because I once gave a talk on API design where each section had an alliterative title. And I came up with the same alliteration that you did.
Bill Venners: I wanted to talk about your recommendation that we document for inheritance or disallow it.
Josh Bloch: I should come clean right up
front. That was a deliberate overstatement. If you look at our
APIs, you'll see that there are a bunch of them that are neither
designed for inheritance nor do they disallow it. For example,
Hashtable isn't designed for inheritance, but
doesn't disallow it. And in fact, people do subclass it. They do
override methods, and they do produce garbage.
The statement was a reaction to stuff that I'd seen, but I didn't start really putting it into practice until recently. Realistically, I don't expect people will start designing for inheritance or disallowing it outright, but I hope they will start thinking in these terms. At least they will start disallowing untrammeled inheritance, where you can override anything and produce complete garbage. I hope the advice will cause people to write more final classes, more immutable classes, generally safer classes. But I don't expect that as of tomorrow, everybody will start designing for inheritance or disallowing it.
Bill Venners: I kind of got from reading your book that this was something that you'd come to more recently. It made sense to me, but I had really never thought about it before. When I first read about final classes in Gosling and Arnold's The Java Programming Language book, it said, "Be careful. Making classes final is an extremely severe restriction on clients, and you should only do it for security reasons." And so, I think I still kind of have that mindset. I am sheepish about making classes final because it seems so drastic.
Josh Bloch: You don't need that mindset anymore. My view is you can always add something, but you can't take it away. Make it final. If somebody really needs to subclass it, they will call you. Listen to their argument. Find out if it's legitimate. If it is, in the next release you can take out the final. In terms of binary compatibility, it is not something that you have to live with forever. You can take something that was final and make it non-final. If you had no public constructors, you can add a public constructor. Or if you actually labeled the class final, you can remove the access modifier. If you look at the binary compatibility chapter of the JLS (Chapter 13), you will see that you have not hurt binary compatibility by adding extendibility in a later release.
Bill Venners: Well, if a class is
Serializable I think it will often be impractical to
remove final in a later version of the class. Let's say you and I
are two VMs on a network, and you have version one of a class,
which is final. I have version two of that class, in which final
has been taken away, plus a subclass of the now non-final class.
If I attempt to serialize and send you an instance of the
subclass, it won't deserialize on your side. Because on your side
the class exists, but at version one, which is final.
Josh Bloch: Yep.
Bill Venners: In your book you recommend
using a copy constructor instead of implementing
Cloneable and writing
clone. Could you
elaborate on that?
Josh Bloch: If you've read the item about
cloning in my book, especially if you read between the lines, you
will know that I think clone is deeply broken. There are a few
design flaws, the biggest of which is that the
Cloneable interface does not have a
clone method. And that means it simply doesn't work: making
something Cloneable doesn't say anything
about what you can do with it. Instead, it says something about
what it can do internally. It says that if by calling
super.clone repeatedly it ends up calling
Object's clone method, this method will
return a field copy of the original.
But it doesn't say anything about what you can do with an
object that implements the
Cloneable interface,
which means that you can't do a polymorphic
clone operation. If I have an array of
Cloneable, you
would think that I could run down that array and clone every
element to make a deep copy of the array, but I can't. You cannot
cast something to
Cloneable and call the
clone method, because
Cloneable doesn't
have a public
clone method and neither does
Object. If you try to cast to
Cloneable
and call the
clone method, the compiler will say you
are trying to call the protected
clone method on
Object.
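The flaw is easy to confirm: Cloneable is a pure marker interface. This hypothetical snippet uses reflection just to show that the interface declares nothing, which is why a polymorphic clone call won't compile:

```java
// Cloneable declares no methods at all, so the only clone() visible
// through a Cloneable reference is Object's, which is protected --
// hence "((Cloneable) obj).clone()" is a compile-time error.
final class CloneableMarkerDemo {
    static int methodsDeclaredByCloneable() {
        return Cloneable.class.getDeclaredMethods().length;
    }
}
```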
The truth of the matter is that you don't provide any
capability to your clients by implementing
Cloneable
and providing a public
clone method other than the
ability to copy. This is no better than what you get if you
provide a copy operation with a different name and you don't
implement Cloneable. That's basically what you're
doing with a copy constructor. The copy constructor approach has
several advantages, which I discuss in the book. One big
advantage is that the copy can be made to have a different
representation from the original. For example, you can copy a
LinkedList into an
ArrayList. The
clone method is very
tricky. It's based on field copies, and it's "extra-linguistic."
It creates an object without calling a constructor. There are no
guarantees that it preserves the invariants established by the
constructors. There have been lots of bugs over the years, both
in and outside Sun, stemming from the fact that if you just call
super.clone repeatedly up the chain until you have
cloned an object, you have a shallow copy of the object. The
clone generally shares state with the object being cloned. If
that state is mutable, you don't have two independent objects. If
you modify one, the other changes as well. And all of a sudden,
you get random behavior.
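The copy-constructor alternative can be sketched as follows. Unlike a super.clone chain, the copy is built by an ordinary constructor, so the class invariants hold and no mutable state is shared with the original (raw types are used here to match the pre-generics era of the interview):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

final class CopyConstructorDemo {
    // Copies a LinkedList into an ArrayList via a copy constructor.
    // The copy may have a different representation than the original,
    // and mutating the original afterwards does not affect the copy.
    static boolean copyIsIndependent() {
        LinkedList original = new LinkedList();
        original.add("a");
        List copy = new ArrayList(original);   // copy constructor
        original.add("b");                     // mutate the original
        return copy.size() == 1 && copy.get(0).equals("a");
    }
}
```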
There are very few things for which I use
Cloneable anymore. I often provide a public
clone method on concrete classes because people
expect it. I don't have abstract classes implement
Cloneable, nor do I have interfaces extend it,
because I won't place the burden of implementing
Cloneable on all the classes that extend (or
implement) the abstract class (or interface). It's a real burden,
with few benefits.
Doug Lea goes even further. He told me that he doesn't use
clone anymore except to copy arrays. You should use
clone to copy arrays, because that's generally the
fastest way to do it. But Doug's types simply don't implement
Cloneable anymore. He's given up on it. And I think
that's not unreasonable.
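Copying an array is the one case where clone remains idiomatic, as a quick sketch shows (in older JDKs the result must be cast, because clone() there returns Object):

```java
final class ArrayCopyDemo {
    // clone is concise and typically the fastest way to copy an array.
    // The copy is independent at the top level (shallow for elements).
    static int[] copy(int[] original) {
        return (int[]) original.clone();
    }
}
```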
It's a shame that
Cloneable is broken, but it
happens. The original Java APIs were done very quickly under a
tight deadline to meet a closing market window. The original Java
team did an incredible job, but not all of the APIs are perfect.
Cloneable is a weak spot, and I think people should
be aware of its limitations.
Bill Venners: When would you use a factory method versus a constructor?
Josh Bloch: I like factory methods, as you probably know from the book. Factory methods give you a lot of flexibility in terms of what type you can return: you can return any subclass of a declared type. They give you the ability not to create an object each time they are invoked. For immutable types, they are just great, because they can save you from producing a whole bunch of functionally identical objects.
For example, the
Boolean type, which is a boxed
boolean, simply should not have had public
constructors. It is basically a type-safe enum, and it should
have been one. There should only be two
Boolean objects in a VM at any time, one
true and one
false. There's really no great advantage to allow multiple
trues or multiple falses.
I've seen programs that produce millions of
trues and millions of
falses, creating needless work for
the garbage collector.
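The static factory that fixes this is Boolean.valueOf(boolean) (added for the boolean primitive in J2SE 1.4): instead of allocating, it hands back one of the two canonical instances. A minimal sketch:

```java
final class BooleanFactoryDemo {
    // Boolean.valueOf never creates a new object; it returns the
    // canonical Boolean.TRUE or Boolean.FALSE, unlike new Boolean(b),
    // which allocates a fresh object on every call.
    static Boolean box(boolean b) {
        return Boolean.valueOf(b);
    }
}
```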
So, in the case of immutables, I think factory methods are great. In the case of classes that reasonably admit multiple implementations, for which you want the ability to change implementations over time, or change implementations at run time based on usage characteristics, I think they are good.
I also think factory methods are useful if you find yourself
with a bunch of constructors that have a bunch of arguments, and
it's hard to keep them straight. When you read a constructor
invocation and you see a bunch of different arguments, you don't
know what they are there for. It's nice if you can replace the
new BigInteger() with
BigInteger.probablePrime. Now you know what that
invocation is doing--it's producing a probable prime.
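The BigInteger.probablePrime static factory (added in J2SE 1.4) illustrates the readability gain; this small wrapper is just for demonstration:

```java
import java.math.BigInteger;
import java.util.Random;

final class ProbablePrimeDemo {
    // The static factory's name says what it returns; the equivalent
    // constructor invocation gave no such hint.
    static BigInteger sixteenBitPrime(Random rnd) {
        return BigInteger.probablePrime(16, rnd);
    }
}
```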
In fact, if you look at
BigInteger, you'll see
that we did exactly that. We needed to change the semantics of
the prime generation stuff a little bit. At the same time as we
did that, we changed it from a constructor to a static factory.
Once again, this represents an evolution in our thinking over the years.
The only time you really need accessible constructors is when you want to allow for subclassing, as subclass constructors must invoke superclass constructors. If you want to be subclassable, then you pretty much can't use static factories.
Bill Venners: Those constructors can be protected and not public.
Josh Bloch: They can. That's another approach, to have public static factories and protected constructors. And I must confess, I haven't actually written in that style much, I think because I tend not to write classes that encourage subclassing.
When you are writing a class, you can run down my book's list of the advantages of static factories over public constructors. If you find that a significant number of those advantages actually apply in your case, then you should go with the static factories. Otherwise you should go with the constructors.
Some people were disappointed to find that advice in my book.
They read it and said, "You've argued so strongly for public
static factories that we should just use them by default." I
think the only real disadvantage in doing so is that it's a bit
disconcerting to people who are used to using constructors to
create their objects. And I suppose it provides a little less of
a visual cue in the program. (You don't see the
new keyword.) Also it's a little more difficult to find static
factories in the documentation, because Javadoc groups all the
constructors together. But I would say that everyone should
consider static factories all the time, and use them when they apply.
Bill Venners: When should I use a
List versus an array as a return type or parameter type?
Josh Bloch: Darn good question. The equation
changes, of course, when we get generics. Right now, if you want
to provide a statically typed return value or input parameter,
then you have to use an array.
Lists are not
statically typed, because they are ordinary old interfaces and
there is no way to parameterize a user type these days. Arrays,
because they are part of the language, have the distinction of
being the only generic type, or parameterized type, in the
language. So, if it's very important to you to have static
typing, then you must use arrays.
The other thing is that arrays, once again, are fairly
traditional. They are less disconcerting to people than
Lists, which have not been
widely adopted yet as the currency for passing stuff in and out.
On the other hand,
Lists are nice. They are very
flexible. You can pass anything, including an array, on input if
you have an input parameter that's a
List. So, if
you are writing something where you really don't know where the
input is coming from, then
List makes a nice input
type. If someone happens to have an array, they can say
Arrays.asList and pass the array in as a
List. And that doesn't involve a copy. It's a simple
wrapping operation with a small, constant cost.
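The wrapping behavior is worth seeing concretely: Arrays.asList does not copy. It returns a fixed-size List view backed by the array, so writes through the view land in the array (raw types here match the pre-generics era):

```java
import java.util.Arrays;
import java.util.List;

final class AsListDemo {
    // Arrays.asList wraps in O(1): the returned List is a fixed-size
    // view over the array, so a set() on the view changes the array.
    static boolean viewIsBackedByArray(String[] a) {
        List view = Arrays.asList(a);
        view.set(0, "changed");
        return "changed".equals(a[0]);
    }
}
```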
On the other hand, if it will be used in a more tightly
defined context and you really would like to have that compile
time type safety, so that you don't get a
ClassCastException at run time if you were expecting a
List of mail messages and you got a
List of mail folders, then I think you should use
the arrays. Arrays are also likely to be a bit cheaper, but on
the other hand, you know how I feel about performance. If it's
not a bottleneck, then you might as well do the right thing,
rather than the fast thing.
So right now, a lot of people use arrays to get compile time
type safety, and they certainly have my blessing until the
language has generics. Once you can pass in a
List of
mail messages and a
List of mail folders as two
distinct types, then I think it becomes much more reasonable to
use List on input.
In fact, we haven't really thought hard about this stuff, but
I suppose it would even be possible to have arrays implement the
List interface. So that when you take a
List on input, you can also pass an array without
even wrapping it. That would be a major change. When we put the
collections framework into the platform, we did it without any
changes to the language. But I think it might not have been
unreasonable to do exactly that, to turn arrays into Lists.
You touched on the only two reasonable input types,
List and array. Don't use
Vector on
input, don't use
Hashtable. They don't cut it. Of course,
Map is okay as well. On output, of course,
you can use the other things if you are willing to commit to them.
Bill Venners: In your book, you say it's usually not a good idea to do object pooling for lightweight objects, because the VM is very efficient at garbage collecting lightweight objects. By "lightweight" do you mean it doesn't take much time to construct the object, or the object doesn't take much memory? Define heavyweight versus lightweight object.
Josh Bloch: I really do mean time. But on the other hand, if it takes a lot of memory, it probably takes a lot of time, because you are initializing all that memory. Basically the idea is that you'd like to amortize the work of creating a heavyweight object. The canonical example is a database connection, where you do all sorts of native calls to establish the session. And it can take a major fraction of a second (or more) to do this. It would be silly to use it for a few calls, discard it, and then spend that time again to do a few more calls the next time someone needs to access the database. You should decide up front how many resources you want to allocate to this. You should create, say, five open connections, or whatever. You should have this pool of connections and just share them among all of the clients.
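A minimal sketch of the pooling idea Josh describes, with the connection objects faked as plain Objects; a real pool would also handle blocking, timeouts, and connection validity:

```java
import java.util.ArrayList;
import java.util.List;

// Decide the pool size up front, create the heavyweight objects once,
// and share them among all clients.
final class SimplePool {
    private final List available = new ArrayList();

    SimplePool(int size) {
        for (int i = 0; i < size; i++) {
            available.add(new Object());  // stand-in for a costly connection
        }
    }

    synchronized Object acquire() {
        if (available.isEmpty()) {
            throw new IllegalStateException("pool exhausted");
        }
        return available.remove(available.size() - 1);
    }

    synchronized void release(Object conn) {
        available.add(conn);
    }
}
```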
It's more about computation time than it is about memory. I'm not an expert in this area, but the people in the 2D graphics community talk about pooling frame buffers as well, where there are huge quantities of memory involved (megabytes). It's also about predictability. If something is big, and you have strict bounds on how many you will need, it might be reasonable to leave them around and pool them yourself.
Bill Venners: In an equals method, should I use Class comparison to determine if the passed object is the "same" type as this one?
Josh Bloch: The book has an item about the
equals method in the 'Methods Common to all Objects'
chapter. All the
equals methods discussed in that item use
instanceof rather than
getClass. I knew that some people use
getClass, but I thought the technique was not widely
used. The essay was long and complex already, so I thought I
wouldn't confuse the issue by discussing another technique which
I thought was inferior. But it turned out to be a mistake. It
turns out that getClass-based equals methods are fairly
widely used and discussed in other books. This turns out to be
perhaps the most controversial item in the book. I got a lot of
email on the subject.
The reason that I favor the
instanceof approach
is that when you use the
getClass approach, you have
the restriction that objects are only equal to other objects of
the same class, the same run time type. If you extend a class and
add a couple of innocuous methods to it, then check to see
whether some object of the subclass is equal to an object of the
super class, even if the objects are equal in all important
aspects, you will get the surprising answer that they aren't
equal. In fact, this violates a strict interpretation of the
Liskov substitution principle, and can lead to very
surprising behavior. In Java, it's particularly important because
most of the collections (
Hashtable, etc.) are based on the
equals method. If you put a member of the
super class in a hash table as the key and then look it up using
a subclass instance, you won't find it, because they are not equal.
Because of these problems, I didn't even bother discussing the
getClass approach. But it turns out that because it
does let you add aspects while preserving the equals contract,
some people favor it. So I just want to get the information out
there that it has disadvantages too. The biggest disadvantage is
the fact that you get two objects that appear equal (because they
are equal on all the fields) but they are not equal because they
are of different classes. This can cause surprising behavior.
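The two approaches can be contrasted in a small sketch; Point and NamedPoint are hypothetical classes. With the instanceof-based equals shown here, a subclass that adds only an innocuous label still compares equal to a plain Point; a getClass()-based check would report the two as unequal, the Liskov-violating surprise described above:

```java
// instanceof-based equals, in the style the book recommends.
class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    public boolean equals(Object o) {
        if (!(o instanceof Point)) {       // getClass() == o.getClass()
            return false;                  // would reject subclasses here
        }
        Point p = (Point) o;
        return p.x == x && p.y == y;
    }

    public int hashCode() {
        return 31 * x + y;
    }
}

// Subclass that adds a label but does not change the notion of equality.
class NamedPoint extends Point {
    private final String name;

    NamedPoint(int x, int y, String name) {
        super(x, y);
        this.name = name;
    }

    public String toString() { return name; }
}
```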
I'm going to write an essay discussing this in more detail. When it's done, I'll put it up on the book's web site. It will compare the two approaches and discuss the pros and cons.
Bill Venners: The reason I have used
getClass in the past is because I don't know what is
going to happen in subclasses. I can't predict that when I am
writing my class. It seemed risky to me to use
instanceof. If I used
instanceof and
someone passed a subclass instance to my superclass
equals method, I would determine semantic equality
by comparing only fields that exist in the superclass. I'd be
ignoring any fields declared in subclasses, and it seems like
they could be important to the notion of semantic equality for the subclass.
The other way to look at it I guess is that using
instanceof in a superclass
equals
method makes it harder to write subclasses, because
equals is supposed to be symmetric. If you call the subclass's
equals method implementation, passing in
an instance of the superclass, it must return the same result as
if you passed the subclass instance to the superclass's equals method.
Josh Bloch: Yes. You are correct in saying that a getClass-based
equals makes it
much easier to preserve the
equals contract, but at
what cost? Basically, at the cost of violating the Liskov
substitution principle and the principle of least
astonishment. You write something that obeys the contract,
but whose behavior can be very surprising.
Bill Venners: One thing you said in your book that I thought was interesting is that there is no way in Javadoc to separate the responsibilities and contract for people who use the class, from the responsibilities and contract for people who make subclasses.
Josh Bloch: Agreed, and I really think it would be an improvement if that were so, because the current situation is the two are intermixed and all the stuff for subclasses merely confuses the great multitude of programmers who won't subclass it. In particular, protected methods confuse novice programmers who aren't going to subclass a class, but are going to use it. I would prefer that they simply didn't see those protected methods when they look at Javadoc output. I think Javadoc should have views of the class. I think there should be a subclasser view that gives the contractual responsibilities and the protected fields that are available to subclassers, and that normal users should be shielded from this.
Bill Venners: In your book you say, "It is always beneficial to detect programming errors as quickly as possible." I've met people who don't feel that way: people from the Smalltalk community, people who like Python, and so on. These people feel that all those compile time errors get in the way of their productivity. They feel more productive in a weakly typed environment, where more problems must be discovered at runtime. These people feel that their weakly-typed language of choice gives them as much robustness, but more quickly, than strongly-typed languages such as Java.
Josh Bloch: I quibble with the fact that they are getting as much robustness. I suppose the extreme example of that is shell scripts, which are interpreted. There is no compile time. You can code anything you want. And I think anyone who has used shell scripts has seen them blow up in the field. In fact, people don't expect them to run on all inputs. If you take a shell script, try to do something fancy with it, and it doesn't work, you say "Oh well, I guess it doesn't handle that." And you play around with the inputs and try to find something it does handle.
There's no doubt that you can prototype more quickly in an environment that lets you get away with murder at compile time, but I do think the resulting programs are less robust. I think that to get the most robust programs, you want to do as much static type checking as possible.
I do understand that people coming from these environments really do find static type checking constraining. It's no fun to deal with compile-time errors, but there are real benefits in terms of robustness. So, you pay your money and you take your choice.
Joshua Bloch is the author of Effective Java Programming Language
Guide (Addison Wesley Professional, 2001; ISBN: 0201310058).
Portions of this interview were first published January 4,
2002 in JavaWorld.