Entropy Reduction

A Conversation with Luke Hohmann, Part II

by Bill Venners
March 22, 2004

Luke Hohmann talks with Bill Venners about entropy reduction, the cost of change, and programming as choreography.

Luke Hohmann is a management consultant who helps his clients bridge the gap that often exists between business and technology. In his past experience, he has played many of the varied roles required by successful software product development organizations, including development, marketing, professional services, sales, customer care, and business development. Hohmann currently focuses his efforts on enterprise-class software systems. He is the author of Journey of the Software Professional: A Sociology of Software Development (Prentice-Hall, 1997), which blends cognitive psychology and organizational behavior into a model for managing the human side of software development. He is also the author of Beyond Software Architecture: Creating and Sustaining Winning Solutions (Addison-Wesley, 2003), which discusses software architecture in a business context.

On March 8, 2004, Bill Venners met with Luke Hohmann in Sunnyvale, California. In this interview, which will be published in multiple installments on Artima.com, Hohmann discusses software architecture in the context of business.

  • In Part I: Growing, Pruning, and Spiking Your Architecture, Hohmann discusses architecture and culture, the importance of completeness in new architectures, and implementing features in spikes.
  • In this second installment, Hohmann discusses entropy reduction, the cost of change, and programming as choreography.

Technical Debt and Entropy Reduction

Bill Venners: In your book, Beyond Software Architecture, you write:

Architecture degradation begins simply enough. When market pressures for key features are high and the needed capabilities to implement them are missing, an otherwise sensible engineering manager may be tempted to coerce the development team into implementing the requested features without the requisite architectural capabilities.

You then advise scheduling some time after each release for "entropy reduction" to pay off technical debt accumulated during the push to get the release out the door. What is entropy reduction?

Luke Hohmann: In most of the enterprise systems I've built, I could show you where it is a model of really well-done design. It has three tiers like the books recommend. It responds and adapts well to the needs of the market. It works really well. But in that same system, I could show you part of it that is a complete and utter ugly hack. It was 2 a.m. and we had to ship in three days, and we just hacked that part together.

I think refactoring coupled with the right things, automation and test-driven design, is wonderful. They all work together to help make software more changeable—and more safely changeable—than it was in the past. But when it's time to ship and you've got to hit code freeze, that refactoring stuff is the least important thing in my mind.

You brought up the concept I call entropy reduction, which is quite possibly an idiosyncratic way that I like to run projects. After a release, I spend some time trying to hold the features of the system constant and just clean up the inside a little bit. Before I start on the next release, I take the known technical debt, the known entropy that was introduced, and clean it up. It's like putting a hold on refactoring to get the release out the door, and then turning on refactoring again afterwards. Because you have to turn refactoring off if you're going to ship.
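The distinction Hohmann draws can be sketched in code. The following is a hypothetical example (the function names, the schema, and the FakeDB stub are all invented for illustration): a feature shipped as a quick hack to hit code freeze, then cleaned up in an entropy-reduction pass while holding the feature's behavior constant.

```python
# Hypothetical example: names, schema, and the FakeDB stub are
# invented for illustration.

class FakeDB:
    """Stand-in for a real database connection."""
    def execute(self, sql):
        return [(1, 10.0), (2, 25.5)]

# The "2 a.m. hack": data access and CSV formatting tangled together
# to hit the code freeze.
def get_orders_report(db):
    rows = db.execute("SELECT id, total FROM orders WHERE status = 'paid'")
    out = ""
    for order_id, total in rows:
        out += str(order_id) + "," + str(total) + "\n"  # hard-coded format
    return out

# The entropy-reduction pass after release: same feature, same output,
# but the concerns are separated so the next change is cheaper.
def fetch_paid_orders(db):
    """Data access only -- no formatting concerns."""
    return db.execute("SELECT id, total FROM orders WHERE status = 'paid'")

def format_as_csv(rows):
    """Presentation only -- easy to swap for another format later."""
    return "".join(f"{order_id},{total}\n" for order_id, total in rows)

def get_orders_report_clean(db):
    return format_as_csv(fetch_paid_orders(db))
```

The point of the exercise is that the feature set is held constant: both versions produce identical output, and only the inside of the system gets cleaner.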

Dave Quick from Microsoft once told me that the way he refers to this idea is that shipping becomes a feature. You've got a list of features you want to hit, and if you actually want to ship, shipping becomes one of those features too, either explicitly or implicitly. You need to do what is necessary to get that feature done as well.

Entropy reduction means I am going to make whatever compromise I need to make to get the release done, and then I'm going to give the team the opportunity to go back and respond to, or recover from, the compromises they made. It is a very different concept than refactoring. I think of refactoring not as architectural work, but as very much part of normal, day-to-day work.

The Cost of Change Curve

Bill Venners: When you say entropy reduction, are you talking about looking at the architecture as a whole and cleaning up problems with the big picture?

Luke Hohmann: Yes, you can make big picture changes during entropy reduction, but typically I don't want that to happen. I don't want either entropy reduction or refactoring to be architecture-level changes. Nevertheless, it does sometimes happen, and that's when the cost of change curve can jump high again.

One of the things I bring to the table in terms of real world experience is that in several cases I've been with the same architecture over multiple releases. In one example I was with the same product over four years and six releases. In the XP model, the cost of change curve rises for a couple of releases, and then levels off. [See Figure 1.] After that, XP says the cost of change will be constant, but in reality that's a very simplistic point of view. In the first release, you're proving out your architecture. In the second release, you'll have started getting feedback from your initial users about what is working and what isn't, and you might have some changes that are fairly important to make. But once you are in your third release, in the XP model, you have most everything figured out, right? What happens if in the fourth release of a heavy client/server application, marketing says, "I've got to crack open the mobile market, because mobile's huge." The curve will jump.

Figure 1. A Flat Cost of Change Curve

In the initial phase of the curve, you are getting the grooves in the road of your architecture and proving it out. Eventually you get it down, but sometime later you may hit some major problem, some major architectural flaw. At that point, you're going to see that curve start all over again. The cost of change curve is going to have another big jump, because you are engaging in architectural-level refactoring, which is touching lots of components and lots of systems, and possibly even replacing many of them from scratch. [See Figure 2.] Over the lifetime of a product, the cost of change is several XP-like curves, each of which is relative to a particular architecture.

Figure 2. A Bumpy Cost of Change Curve

For example, when marketing says that in addition to the heavy desktop client you currently support, you also need to support Palm computers, guess what? Your cost of change curve just flew up high again. These cost of change jumps are usually correlated with significant architectural infrastructure change, because you don't have any infrastructure for your tests. You don't have any infrastructure for your database. You're probably learning something new in the development team. Your developers don't know how to program Palm. Are you going to throw out all the people or keep them? If you keep them, they've got to learn it. They don't know the idioms.

Think of all of the infrastructure that helps keep your cost of change curve low, which you can get to not only in XP but also in other methods. In your existing system, you have all your tests and your test database. Your build infrastructure is all set. Your documentation infrastructure works. You've figured out your naming conventions and tagging conventions. You've figured out how to link your help file to your online documentation. You have all that figured out, and the cost of change curve is low, because a big part of the high cost of change is creating this infrastructure, of figuring this stuff out. Now you don't have all the infrastructure for Palm support, or whatever the new requirement is, so your cost of change curve jumps again. I applaud the concept of the flattening cost of change curve, because it's right. You should be able to achieve that flattening after the first release or two. It's just not true for a mature product over its entire lifecycle.
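The kind of infrastructure Hohmann describes can be made concrete with a small sketch. This is a hypothetical example (the schema and test are invented for illustration) of one such piece: a test fixture that seeds an in-memory database with known data, so that every change to the system can be verified cheaply against the same baseline. It is exactly this sort of accumulated scaffolding that a team must rebuild when a new platform arrives.

```python
import sqlite3
import unittest

# Hypothetical sketch of test infrastructure: an in-memory database
# seeded with known fixture data. The schema and queries are invented
# for illustration.
class OrderTestCase(unittest.TestCase):
    def setUp(self):
        # Fresh, disposable database for every test run.
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE orders (id INTEGER, total REAL, status TEXT)"
        )
        self.db.executemany(
            "INSERT INTO orders VALUES (?, ?, ?)",
            [(1, 10.0, "paid"), (2, 25.5, "open")],
        )

    def test_paid_orders_only(self):
        # Any change to the order logic is checked against the
        # same known baseline, which keeps the cost of change low.
        rows = self.db.execute(
            "SELECT id FROM orders WHERE status = 'paid'"
        ).fetchall()
        self.assertEqual(rows, [(1,)])
```

Once a fixture like this exists, verifying a change costs seconds; before it exists, every change carries the cost of building the verification machinery first.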

Programming is Choreography

Bill Venners: You write in your book, "An architecture is like a carefully designed garden, and needs care and feeding." Can you elaborate?

Luke Hohmann: I think many people think about software architecture the way they think about city streets. When you build city streets, you come, you pave, and you come back in six years, after a lot of heavy wear and tear, and you patch a few potholes. And after another four years of heavy wear and tear, you scrape it all and you repave. That very inorganic view of the world is not how I think of modern software architecture. Modern software architecture is much more akin to an English garden that you plan and you plant and you weed and you improve. Yes, there's an overall plan, but the result requires more tending. I think that tending is one of the things that frustrates executives. Why does software require tending? Things shift on you. Your database just upgraded, you have XML infrastructure to deal with, the customer wants a middleware option, and an operating system just changed. There's more shifting in software than people realize.

Now, we do actually repave city streets and fix potholes, but the city streets metaphor is more about mindset. To me, the mindset of the city street builder is, I'm going to pave it and I hope to God I don't have to come back for ten years, because that's my cost model. Whereas to me, my expectation is that you are tending and nurturing this software architecture that you have created.

I'm cautious about using Alexandrian-style building architecture as a metaphor for software development, for a couple of reasons. One is that, similar to city streets, building architecture is very static. But number two is that I think developers deal with space and time differently, especially time, than architects. I find that dance choreography is a much better analogy. Choreography is much more akin to what software people do, because dance has elements that move relative to each other in time and space. In building architecture, you design a structure and how things move inside it. In dance, the structure is amorphous. It's called a stage. Yes, it has sides and a back wall, but that's about it. And the choreographer has these elements, these objects and data structures, which interact with each other and are themselves changeable. To me, that is a much closer analogy to what a software architect does than building architecture. I think the Alexandrian patterns are great and wonderful, but I don't want to take them too far. I am definitely a person who wants to bring the organic metaphors, the movement, the ability for things to interact, to software. I think those metaphors are more compelling and powerful, and more accurately describe what we do.

Next Week

Come back Monday, March 29 for the next installment of this conversation with Luke Hohmann. If you'd like to receive a brief weekly email announcing new articles at Artima.com, please subscribe to the Artima Newsletter.




About the author

Bill Venners is president of Artima Software, Inc. and editor-in-chief of Artima.com. He is the author of the book Inside the Java Virtual Machine, a programmer-oriented survey of the Java platform's architecture and internals. His popular columns in JavaWorld magazine covered Java internals, object-oriented design, and Jini. Bill has been active in the Jini Community since its inception. He led the Jini Community's ServiceUI project that produced the ServiceUI API. The ServiceUI became the de facto standard way to associate user interfaces with Jini services, and was the first Jini community standard approved via the Jini Decision Process. Bill also serves as an elected member of the Jini Community's initial Technical Oversight Committee (TOC), and in this role helped to define the governance process for the community. He currently devotes most of his energy to building Artima.com into an ever more useful resource for developers.