Over the last decade, Martin Fowler pioneered many software development techniques in the development of business information systems. He's well known for his work on object-oriented analysis and design, software patterns, the Unified Modeling Language, agile software processes (particularly extreme programming), and refactoring. He is the author of Analysis Patterns (Oct. 1996), Refactoring (June 1999; coauthored with Kent Beck et al.), UML Distilled (Aug. 1999; with Kendall Scott), Planning Extreme Programming (Oct. 2000; with Kent Beck), and the soon-to-be-released Patterns of Enterprise Application Architecture (Nov. 2002), all published by Addison-Wesley.
In this six-part interview, which will be published in weekly installments, Fowler gives his views on many topics, including refactoring, design, testing, and extreme programming. In this initial installment, Fowler makes the business case for refactoring and testing, and describes the interplay between refactoring, design, and reliability.
Bill Venners: Define refactoring.
Martin Fowler: Refactoring is making changes to a body of code in order to improve its internal structure, without changing its external behavior.
Bill Venners: If refactoring doesn't add features or fix bugs, what is the business case for it? How do you justify refactoring?
Martin Fowler: Refactoring improves the design. What is the business case of good design? To me, it's that you can make changes to the software more easily in the future.
Refactoring is about saying, "Let's restructure this system in order to make it easier to change it." The corollary is that it's pointless to refactor a system you will never change, because you'll never get a payback. But if you will be changing the system—either to fix bugs or add features—keeping the system well factored or making it better factored will give you a payback as you make those changes.
Bill Venners: In a team, different people have different tastes. A friend of mine worked at a company where a new guy joined the group, and he had particular ideas of what makes a good name. He liked to put m_ in front of member names, underscores between each word, and so on. He made some simple refactorings like the kind you recommend, but those refactorings caused some problems.
For example, some of the company's code was secret because of licensing restrictions, so legally the new guy couldn't access it. He ended up breaking that code, and my friend had to fix it. His refactorings also caused problems when my friend needed to fix bugs in multiple branches of the source code that represented different releases. Different branches now contained different names, so it was harder to fix the bugs in multiple releases. Also, people who had been there a while, like my friend, were used to the old names. At one point my friend couldn't find a method she was looking for because it had been renamed. The new guy's refactorings caused those kinds of problems.
Who in a team decides what's a good name? Who decides when and how to refactor?
Martin Fowler: Refactoring doesn't mean you pathologically rename a bunch of things just because you think it's good. You refactor if there's some benefit. If you're renaming, you look at some method that perhaps doesn't convey what it's supposed to do and the people who work on it prefer to call it something else. As far as naming conventions, the team must come up with the naming conventions they want to work with. You must be aware of that if you come in from the outside. If I came in as a consultant, I wouldn't impose my particular code conventions. I'd ask the team what its conventions are, review them, and use those. On the other hand, I would oppose a method called x374, because it doesn't convey the meaning.
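A brief sketch of the kind of rename Fowler means. Only the name x374 comes from the conversation; the class, the method's purpose, and the tax rate below are invented for illustration.

```java
// Hypothetical sketch of the Rename Method refactoring. Only the name
// x374 comes from the conversation; everything else is invented.

// Before: the name conveys nothing about what the method does.
class OrderBefore {
    double x374(double amount) {
        return amount * 0.05;
    }
}

// After: identical body, but the name now states the intent.
class OrderAfter {
    double salesTaxFor(double amount) {
        return amount * 0.05;
    }
}
```

With a tool (or a disciplined search), every caller of x374 is updated in the same step, which is exactly why solid tests matter: they confirm the rename really was behavior-preserving.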
As far as breaking a piece of secret code—that tells me their testing wasn't strong enough. Testing is a very important underpinning to refactoring.
Bill Venners: You say in your book Refactoring: "If you want to refactor, the essential precondition is having solid tests." Does that mean if you don't have tests you shouldn't refactor?
Martin Fowler: You should think of it as walking a tightrope without a net. If you are good at walking a tightrope, and it's not that high up, then you might try it. But if you've never walked a tightrope before, and it's over Niagara Falls, you probably want a good net.
Bill Venners: What is the business case to add tests retroactively if you have no tests currently?
Martin Fowler: Again, you don't want to be surprised when the code breaks because somebody has changed it. One great thing JUnit-style tests give you is the ability to run them and see if you've broken anything. It's no big deal if you aren't changing anything, but if you're adding features or fixing bugs there's always the chance you'll break something. As you get better at your tests, you become more confident about the things you can change. As a result, you can get some high reliability rates.
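A minimal sketch of the JUnit-style safety net Fowler describes. To stay self-contained this uses a hand-rolled assertion rather than the JUnit framework itself, and the Account class and amounts are hypothetical; with real JUnit you would subclass junit.framework.TestCase and use its assertEquals.

```java
// A tiny self-checking test in the JUnit spirit: run it after every change
// and it tells you immediately whether you've broken anything.
// (Hypothetical example; real JUnit would subclass TestCase instead.)
class Account {
    private int balanceInCents = 0;
    void deposit(int cents) { balanceInCents += cents; }
    int balance() { return balanceInCents; }
}

class AccountTest {
    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    public static void main(String[] args) {
        Account account = new Account();
        account.deposit(150);
        account.deposit(250);
        assertEquals(400, account.balance());
        System.out.println("OK");
    }
}
```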
Reliability is one story that hasn't been told about extreme programming (XP), which focuses heavily on testing. People talk a lot about its responsiveness and its light weight, but I hear more stories about XP's staggeringly high reliability. A couple of weeks ago I chatted with Rich Garzaniti, an old colleague from the C3 project. The C3 project at Chrysler is often referred to as the birth project of XP, where Kent first really put the various practices together coherently. Rich talked about a system he was developing using XP with testing and refactoring -- drinking the whole XP Kool-Aid completely through the system. He's had one bug so far this year.
Bill Venners: In Refactoring, you write: "I write tests to improve my productivity as a programmer." What about robustness, quality, and reliability?
Martin Fowler: They all come as part of the package. My view is that defects interfere with our productivity, because we have to take time to fix them. If I can eliminate a defect, I improve my productivity. The fact that I get more robust, reliable software is a valuable side effect. But fundamentally, I can program more features if I don't spend my time debugging and fixing bugs.
Bill Venners: So you invest the time writing tests that you expect to reap back by not fixing bugs.
Martin Fowler: I'll reap the time back within a day, because I spend so much less time debugging. I reap my costs back quickly on tests. Then of course all the other benefits come into play.
Bill Venners: Do you think that quick payback would result if a body of code has absolutely no tests?
Martin Fowler: It takes a longer time. I don't recommend you spend two months only writing tests. But I think you do benefit quickly by adding some tests, because you begin to find problems. If you focus your tests on the areas of the code you're changing, you'll find that when you make mistakes the tests will let you know about them. Obviously, you get the full benefit when you have a comprehensive set of tests, but even a few tests will start that process and begin to give you feedback.
Bill Venners: You said unit testing helps you be more productive, but you also say that one benefit you get from refactoring is the ability to program faster. How does refactoring help you program faster?
Martin Fowler: Because a better-designed program is easier to change. The better the program's design, the more easily you can alter the program, and therefore the more productive you are.
Bill Venners: You write that you refactor not because it's fun, but because "there are things you expect to be able to do with your programs if you refactor that you just can't do if you don't refactor." Other than change, what can you do with your programs if you refactor that you can't do if you don't refactor?
Martin Fowler: Change is the driving reason. We constantly have to change software—living software at least. Refactoring is about improving the design, and we want a good design so we can change it more easily. Refactoring is similar to things like performance optimizations, which are also behavior-preserving transformations. But the kinds of moves you make in performance optimization are different from those of refactoring, and the overall process is slightly different, because optimizations should be driven by profiling.
Bill Venners: So refactoring means making changes for clarity that don't add or change functionality, so I can easily make changes in the future. Performance optimization is similar in that I make changes that don't change the functionality, other than its timing.
Martin Fowler: Right. They're similar, but I think sufficiently different to warrant different names.
Bill Venners: You wrote in Refactoring: "Refactoring helps improve the design of software." How does refactoring improve the design?
Martin Fowler: I don't think I can give you a general answer, but you can look at the individual refactorings and the way they improve the design. The Extract Method refactoring improves the design by taking a long, convoluted method and breaking it down into smaller methods. What's left of the old method then reads like documentation: a list of things you do by calling the smaller methods.
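As a sketch of what Extract Method looks like in practice (the Receipt class and 5% tax rate here are invented for illustration):

```java
// Hypothetical sketch of Extract Method. Before: one method mixes the
// arithmetic with the formatting. After: the top-level method reads like
// documentation -- a list of the steps, each one named.
class Receipt {
    private final double[] prices;
    Receipt(double[] prices) { this.prices = prices; }

    // Before the refactoring: everything in one place.
    String printBefore() {
        double total = 0;
        for (double p : prices) total += p;
        double tax = total * 0.05;
        return "subtotal: " + total + ", tax: " + tax;
    }

    // After the refactoring: what's left reads like a summary.
    String printAfter() {
        return "subtotal: " + subtotal() + ", tax: " + tax();
    }

    private double subtotal() {
        double total = 0;
        for (double p : prices) total += p;
        return total;
    }

    private double tax() {
        return subtotal() * 0.05;
    }
}
```

Both versions produce the same output; only the structure changes, which is what makes this a refactoring rather than a rewrite.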
Every refactoring has some element of improving a design. Many are context specific. Many refactorings come in opposing pairs. For example, if a method's name says nothing more than what its body already says, then you would inline it. Inline Method is the opposite of Extract Method. Many times context determines whether to apply one refactoring or the other.
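The opposing move, Inline Method, sketched with an invented example: when a helper's name says no more than its body, folding it back into the caller removes a needless indirection.

```java
// Hypothetical sketch of Inline Method, the inverse of Extract Method.
class DeliveryBefore {
    boolean isEligibleForRush(int itemCount) {
        return moreThanFiveItems(itemCount);
    }
    // The name merely restates the body; the indirection earns nothing.
    private boolean moreThanFiveItems(int itemCount) {
        return itemCount > 5;
    }
}

class DeliveryAfter {
    boolean isEligibleForRush(int itemCount) {
        return itemCount > 5;  // the condition inlined directly
    }
}
```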
Bill Venners: You also claim in Refactoring that refactoring helps you find bugs. How does refactoring help you find bugs?
Martin Fowler: I think refactoring helps you find bugs in a couple ways. Sometimes as you make your program easier to understand, the bugs just leap out at you. You'll be refactoring and you'll say, "Hang on, what happens if so and so?" I have found bugs that way, and so have many other people. Refactoring suddenly makes things clearer, so suddenly you can see a bug exists.
Refactoring also helps you find bugs when you're trying to fix a bug in difficult-to-understand code. It's often easier to refactor that code while you're debugging it, so you can easily find where the bug is. By cleaning things up, you make it easier to expose the bug.
Often a good debugging technique is to write focused unit tests around the area where the bug is. Of course that has the advantage that you're not just improving your understanding, but you're also building up the test base, which implies that things will stay fixed.
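A sketch of that debugging technique with an invented Discount example: the focused test reproduces the bug, then guards the fix so it stays fixed.

```java
// Hypothetical example of pinning a bug with a focused unit test.
// Suppose the original perItem divided by quantity without guarding
// against zero; the test below would have exposed the bug and now
// keeps it fixed.
class Discount {
    static double perItem(double totalDiscount, int quantity) {
        if (quantity == 0) return 0.0;  // the fix for the reported bug
        return totalDiscount / quantity;
    }
}

class DiscountRegressionTest {
    public static void main(String[] args) {
        // The focused test that exposed the bug:
        if (Discount.perItem(10.0, 0) != 0.0)
            throw new AssertionError("zero-quantity case regressed");
        // The ordinary case still behaves:
        if (Discount.perItem(10.0, 4) != 2.5)
            throw new AssertionError("per-item discount wrong");
        System.out.println("regression tests pass");
    }
}
```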
Bill Venners: In Refactoring, you list several problems with refactoring, including situations when you shouldn't refactor. How do you decide when you should start from scratch and throw away existing code versus refactor it?
Martin Fowler: The answer is: I don't really know. If you have no tests and cruddy code, then you should probably throw it away and start again because you'll have to do all the testing, as opposed to if you have cruddy code with many tests. If the code is riddled with bugs, then behavior-preserving transformations will of course preserve the bugs, so that might be an argument against refactoring. I think the answer to your question also changes as your comfort level with refactoring increases. As you become more confident with refactoring, you'll want to refactor something that you'd otherwise want to rewrite because you're more skilled at refactoring.
Bill Venners: I was once in a situation where I was one of two consultants whose job it was to get a problem app working. I decided to throw away the piece I was responsible for and start over. The other consultant attempted to refactor, but ultimately his piece never became stable. Eventually I took that piece over, threw it away, and rewrote it from scratch. It seems like at some point, if the code just has no structure, it's more effective to rewrite it than refactor it.
Martin Fowler: It might not be a bad idea to spend some time refactoring it to see how much progress you can make before deciding to rewrite it from scratch.
Refactoring: Improving the Design of Existing Code, by Martin Fowler with Kent Beck, John Brant, William Opdyke, and Don Roberts is at Amazon.com at:
A catalog of summaries of refactorings mentioned in the book, Refactoring:
A refactoring portal maintained by Martin Fowler contains links to refactoring tools and other refactoring sites:
Martin Fowler's links to extreme programming resources:
Articles written by Martin Fowler about XP and agile methods:
Patterns of Enterprise Application Architecture, by Martin Fowler is at Amazon.com at:
UML Distilled: A Brief Guide to the Standard Object Modeling Language, by Martin Fowler and Kendall Scott is at Amazon.com at:
Planning Extreme Programming, by Kent Beck and Martin Fowler is at Amazon.com at:
Analysis Patterns: Reusable Object Models, by Martin Fowler is at Amazon.com at:
Martin Fowler's website contains many articles, book chapters, and other information from Martin:
Bill Venners is president of Artima Software, Inc. and editor-in-chief of Artima.com. He is author of the book, Inside the Java Virtual Machine, a programmer-oriented survey of the Java platform's architecture and internals. His popular columns in JavaWorld magazine covered Java internals, object-oriented design, and Jini. Bill has been active in the Jini Community since its inception. He led the Jini Community's ServiceUI project that produced the ServiceUI API. The ServiceUI became the de facto standard way to associate user interfaces to Jini services, and was the first Jini community standard approved via the Jini Decision Process. Bill also serves as an elected member of the Jini Community's initial Technical Oversight Committee (TOC), and in this role helped to define the governance process for the community. He currently devotes most of his energy to building Artima.com into an ever more useful resource for developers.