Luke Hohmann is a management consultant who helps his clients bridge the gap that often exists between business and technology. In his past experience, he has played many of the varied roles required by successful software product development organizations, including development, marketing, professional services, sales, customer care, and business development. Hohmann currently focuses his efforts on enterprise-class software systems. He is the author of Journey of the Software Professional: A Sociology of Software Development (Prentice-Hall, 1997), which blends cognitive psychology and organizational behavior into a software development model for managing the human side of software development. He is also the author of Beyond Software Architecture: Creating and Sustaining Winning Solutions (Addison-Wesley, 2003), which discusses software architecture in a business context.
On March 8, 2004, Bill Venners met with Luke Hohmann in Sunnyvale, California. In this interview, which will be published in multiple installments on Artima.com, Hohmann discusses software architecture in the context of business. In this first installment, Hohmann discusses architecture and culture, the importance of completeness in new architectures, and implementing features in spikes.
Bill Venners: What is architecture?
Luke Hohmann: That's like asking, what is culture? Culture is the way you do things in a group of people. Architecture is the way you do things in a software product. You could argue by analogy, then, that architecture is to a software product as culture is to a team. It is how that team has established and chosen its conventions.
Which leads us inevitably to the question of "goodness". How do you know if an architecture is good? Consider an architecture that isn't built using a strong domain model, and instead relies heavily on stored procedures. That might be OK, or it might not be OK. You could have decided that part of your architecture is to use a really strong domain model and not use stored procedures, right? So an architecture is some reasonable regularity about the structure of the system, the way the team goes about building its software, and how the software responds and adapts to its own environment. How well the architecture responds and adapts, and how well it goes through that construction process, is a measure of whether that architecture is any good.
Bill Venners: In your book, Beyond Software Architecture, you write, "The system architecture determines how hard or easy it is to implement a given feature. Good architectures are those in which it is considered easy to create the features desired." That made sense to me, in that the way to judge whether an architecture is good is whether the architecture is good for the purposes to which it is applied.
Luke Hohmann: The definition of goodness has to be related to fitness for purpose. Is this glove good? I don't know. What are you doing with the glove? Are you throwing snowballs, cooking barbeques, or playing golf? There's a set of changes that are going to occur to a software system over time. Probably the utilitarian or most useful definition of goodness is the answer to this question: are the changes that will keep this system successful in this domain in this product line relatively easy? If they are, then it's probably a good architecture.
Bill Venners: You write in Beyond Software Architecture:
The initial version of an architecture can be like a young child: whole and complete, but immature and perhaps a bit unsteady. Over time, and through use and multiple release cycles, the architecture matures and solidifies, as both its users and its creators gain confidence in its capabilities while they understand and manage its limitations. The process is characterized by a commitment to obtaining honest feedback and responding to it by making the changes necessary for success. Of course, an immature architecture can remain immature and/or stagnate altogether without feedback. The biggest determinant is usually the market.
What is the difference between maturity and completeness?
Luke Hohmann: In its first version, an architecture should be like a baby. It is whole and complete. It has ten fingers and ten toes. It has all its systems, but it is immature. You wouldn't expect it to do certain things that you might expect a mature system to do. Of course, the definition of what is complete and incomplete, mature and immature, is subjective. But it is really important to distinguish between incompleteness and immaturity. I see people make the mistake of building an incomplete system all the time. It's actually not a mistake that's correlated to a development process, as near as I can tell. You can have iterative, agile projects that produce incomplete results, and you can have waterfall or traditional kinds of projects that produce incomplete results.
Bill Venners: Why is it important to aim for completeness?
Luke Hohmann: I'll give you a simple example from a software product standpoint, which is the bulk of my experience. If you were building a reservation system, you might in your initial release have the ability to create a reservation and delete a reservation. That would actually be a complete system, because if you needed to modify a reservation you could delete it and add it again in modified form. By contrast, I've seen people actually create systems in which you could add and modify a reservation, but not delete a reservation.
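The distinction can be made concrete in a few lines of code. The sketch below is purely illustrative (the class and method names are hypothetical, not from the interview): create and delete alone form a complete system, because a modification can be expressed as a delete followed by a create, and a mature edit operation can later be layered on top of those primitives.

```python
class ReservationSystem:
    """Hypothetical sketch: complete but immature.

    Only create and delete are primitives; modify is a convenience
    built from them, illustrating that the create/delete pair is
    already a complete system.
    """

    def __init__(self):
        self._reservations = {}
        self._next_id = 1

    def create(self, details):
        res_id = self._next_id
        self._next_id += 1
        self._reservations[res_id] = dict(details)
        return res_id

    def delete(self, res_id):
        del self._reservations[res_id]

    def modify(self, res_id, new_details):
        # The "mature" operation, composed of the complete primitives:
        # delete the old reservation and create its replacement.
        self.delete(res_id)
        return self.create(new_details)
```

A system offering only create and modify, by contrast, leaves the user with no way to express cancellation at all, which is why it is incomplete rather than merely immature.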
Bill Venners: And that's not complete, because if it were complete, I could delete. So you're saying that we should shoot for something that's complete, even if not mature.
Luke Hohmann: Right. You always want to shoot for completeness. You want to have a system that's complete, but not necessarily mature. In this example, maturity means it is obviously easier to use an edit than a delete and add. In one approach, the first sequence of iterative development creates something that's whole and complete, but immature. In the other approach, it creates something that's incomplete,...
Bill Venners: ...but has some aspect of maturity.
Luke Hohmann: Yes. The second approach gives you an idiot savant, and that's what you don't want.
Bill Venners: In the end of the paragraph from your book that I quoted earlier, you wrote about the maturing process that, "The biggest determinant is usually the market." Why is the determinant the market?
Luke Hohmann: The abstract definition of the market would be the users who are actually using the system. Most of my experience is actually building products or consulting for companies who actually build products for sale, as opposed to companies who build IT products for internal use. Concretely, if PeopleSoft were one of my customers, when I say market, I mean PeopleSoft's customers of their ERP system. That's the market that's going to determine whether or not that architecture is going to solidify and grow, or whether it's going to die altogether.
Bill Venners: And you're saying that listening to the feedback from the market of users is what will help the architecture mature.
Luke Hohmann: Yes, but not only that. Two kinds of feedback help systems mature. If you are building an enterprise software product, and you have something that's working or headed in the right direction, the feedback you're going to get from the market is, "This is a good feature. I want you to do more of it. This is a good thing." And if you have something going in the wrong direction, you'll hear about that too. That's one kind of feedback.
The other kind of feedback that you need to get is internal. When a given stakeholder, like technical support, says, "We are consistently having a problem here, and we need you to modify the system this way so we can deal with it," that's typically where you are dealing with all those things architects don't think they need to deal with, but which are vital for success. Upgrades, installs, log files, configuration—the architecture has a huge impact on these things.
You can visualize the features of a product as a tree, where the infrastructure is the roots and the main features are the heavy branches. As you look at what you're putting into the next release, you're actually talking about growing your tree, and in the process you also want to prune the tree. To me, architectures are always organic. Some branches will live and others die. You can prune plants for different shapes, and the market and internal feedback informs your decisions.
Bill Venners: In my experience, I have found that if you give people something, even if only 2% of your users want it, it is hard to take it away, to prune it, because they actually use it for something important to them. How do you prune in practice?
Luke Hohmann: That's typically the role of the product manager. A good product manager will actively manage the feature set. I've played that role, so I've made the decision to remove features. It's not necessarily easy, and you will piss people off. One factor that you have to weigh in your decision making process is just how important is that particular 2% of your market. If it is a non-vocal portion that doesn't account for much revenue, then dropping the feature might be good for you (less code to maintain lowers your costs). If it is 2% of your market, but 16% of your revenue, then you'll need to approach the decision more carefully.
Dropping a feature doesn't have to be complex, and chances are good that you've already done it. It can be as simple as dropping support for an outdated operating system. You're not going to run on NT 3.51 right now, so the ripple upgrade effect of Microsoft forcing you to upgrade is in a sense removing a feature. For some customers, who are still running Windows NT 3.51, that's a problem. People do prune the product trees. The trouble is they often just don't do it well enough. Bad pruning is one of the great downfalls of many architectures. The vestiges of unruly bush undergrowth bog the architecture down.
Bill Venners: In your book, Beyond Software Architecture, you write: "The books tell us that when the system is completely new and the problem terrain is somewhat unfamiliar, designers should, for example, 'explore alternative architectures to see what works best.' This is sensible advice. It is also virtually useless." Why is it useless advice?
Luke Hohmann: Because, especially for a software product vendor, you're not going to be funded either internally or by a venture capitalist to build four versions of an architecture and then figure out which one works well. That attitude may be appropriate in academic circles, but it has no bearing in any situation where monetary payback is at stake. Having worked at start-ups, I can tell you that you've got to get the product done. You've got to get it out there, because you've got cash left for X number of months, and if you don't get it done you're not going to survive. "Build a few alternatives and pick the best one," is just not how it works.
Bill Venners: A bit later in your book, you write:
Imagine that you are an explorer and you've just crossed a mountain pass and entered a strange new land. You have at your disposal a variety of gear that can help you navigate mountains, valleys, deserts, rain forests, deep lakes, and fast-moving streams. You also have a small, single-person helicopter for reconnaissance. Do you just jump in and start exploring, or do you fire up the helicopter, take a look around, and plot your course? Good explorers fire up the helicopter and plot their course.
I assume the reconnaissance mission contrasts with going on four different adventures and then coming back and saying, "OK, let's go this way."
Luke Hohmann: That would be the right analogy. There's a difference between fully exploring alternatives and taking a scouting trip that's relatively low cost and gives you an overview. On the scouting trip, you check out the domain and the lay of the land. You still could get complete unknowns, and you still may have chosen to go west when you should have gone north. But you're not going to go north and take all that data, then go west and take all that data, then go south and take all that data, and then figure out which one's right.
Bill Venners: In your book you write, "Whatever the structure of your architecture, build it in spikes—that is, user-visible functionality driven through all layers or subsystems." Could you explain what "build in spikes" means?
Luke Hohmann: Any architecture that I've ever been associated with, whether it had three tiers or 38 tiers, had some concept of layering or subsystems. There are very few architectures that are just one blob of massive crap. There is usually some concept of subsystems somewhere. Bear in mind that 95% of my career has been building business systems. I'm not the guy building embedded devices. I'm not the guy building the Space Shuttle control software. I'm the guy who predominantly has a user interface, a service-oriented interface, domain model, some mapping layer, and persistent data. There's some stack there.
The way I like to envision adding functionality to a system is "spiking the architecture." The way to do it, in my opinion, is to take one use case and push it all the way through. This is what all the agile methods say, and I call that a spike. Other people call it other things. The reason I call it a spike is that, half the time, I think it is equivalent in terms of effort to lining up five boards, putting a spike against them, and trying to drive it through. It's tough sometimes to get it all to work out.
Spiking the architecture contrasts with building one layer at a time. You read about it and you think, people don't do this—yeah, they do. I've seen teams do it. They build a whole flat layer, then try to put the next layer on, and the next layer. They find out the layers don't work together, and they have to go way back to the beginning and redo it. After the first release in one of my development organizations, I gave the team a bronze railroad spike as part of their prize for the release.
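A single spike through a stack like the one Hohmann describes might look roughly like the sketch below. The layer and method names are hypothetical, and the layers are deliberately toy-sized; the point is only the shape: one use case (creating a reservation) driven from the user interface down through service, domain, and persistence layers, rather than each layer being built out in full before the next.

```python
# One vertical "spike": a single use case pushed through every layer.

class PersistenceLayer:
    """Toy persistent store standing in for a real database."""
    def __init__(self):
        self.rows = []

    def insert(self, row):
        self.rows.append(row)
        return len(self.rows) - 1  # row id

class DomainLayer:
    """Business rules for reservations."""
    def __init__(self, db):
        self.db = db

    def create_reservation(self, name, date):
        if not name:
            raise ValueError("a reservation needs a name")
        return self.db.insert({"name": name, "date": date})

class ServiceLayer:
    """Service-oriented interface: translates requests into domain calls."""
    def __init__(self, domain):
        self.domain = domain

    def handle_create(self, request):
        res_id = self.domain.create_reservation(request["name"], request["date"])
        return {"id": res_id}

def ui_submit(service, name, date):
    """Thin user-interface layer: builds the request, reports the result."""
    response = service.handle_create({"name": name, "date": date})
    return "Reservation %d confirmed" % response["id"]
```

Once this spike works end to end, the next use case (say, deleting a reservation) gets driven through the same layers, rather than the team finishing any one layer in isolation.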
Bill Venners: The Pragmatic Programmers, Dave Thomas and Andy Hunt, talk about firing "tracer bullets." If there's an area of risk, you fire a tracer bullet all the way through the architecture just to make sure your concepts will work, and you learn things by doing that. In your spike metaphor, you're talking about how to implement each feature. If you need to implement three features, you don't support all three in one layer, then all three in the next layer. You implement one feature through all layers, then implement the next feature through all layers.
Luke Hohmann: Well, Dave and Andy are great guys, and their description of tracer bullets matches nicely with my own description of spiking. The concept isn't terribly new, and different people or groups call it different things. Extreme Programming, for example, would say that you take one user story and drive it through to business value. The benefit of that is that it's controllable and riskable. By "controllable," I mean you can limit the scope to make certain you're pushing something through. By "riskable", I mean that you can determine the level of risk associated with the spike. Simplifying keeps it under control and helps manage risk.
Unfortunately, most approaches to building in spikes (or tracer bullets, or stories, or use cases, or whatever) have a big weakness: They don't provide much advice on how to really structure the sequence of spikes or delivery of functionality. I've got seven user stories—why should this one be first?
Figuring out that order is the role of a product manager. A good product manager will say, "You should do this one first, because the market will pay the most for this one. It's the most valuable in the market. If we do this one first and this one second, we get a leg up on our competition." People say, "The business people are supposed to tell me what they want done, and I figure out how." I say, "Yeah, that's right, but that's not enough. They should also give you insights on the ordering, because ordering is a business decision." And the way I marry that is that I require that business people give me an ordinal ranking of the features (this is number 1, this is number 2, and so on), along with what I call the "cut line"—the minimal set of features that are needed in the release for it to be considered successful—and an ideal target release window. Before I actually agree to the final set of deliverables, the development team does a dependency pass. The business person may say that a feature is third on her list, but based on a technical dependency analysis, that feature actually has to be done first. And usually you have at least two passes at that.
So it's not just saying to the business person, "Tell me what your user stories are and rank them." It is, "Tell me what your user stories are, rank them, let me come back with a pass through them from a dependency perspective, and then we'll negotiate the cut line or the release set." The release set has to have the minimum set of features that you can go to market with and be successful, plus the set of dependencies that make that whole and complete, as opposed to immature. Oh, one last thing. The features have to be ordinally ranked—not a group of "five important things" but, quite literally, "this is number one, this is number two".
Come back Monday, March 22 for the next installment of this conversation with Luke Hohmann. If you'd like to receive a brief weekly email announcing new articles at Artima.com, please subscribe to the Artima Newsletter.
Luke Hohmann is the author of Beyond Software Architecture: Creating and Sustaining Winning Solutions, which is available on Amazon.com at:
Luke Hohmann is the author of Journey of the Software Professional: A Sociology of Software Development, which is available on Amazon.com at:
The Pragmatic Programmers' home page:
A good place to start looking into Extreme Programming is:
Bill Venners is president of Artima Software, Inc. and editor-in-chief of Artima.com. He is author of the book, Inside the Java Virtual Machine, a programmer-oriented survey of the Java platform's architecture and internals. His popular columns in JavaWorld magazine covered Java internals, object-oriented design, and Jini. Bill has been active in the Jini Community since its inception. He led the Jini Community's ServiceUI project that produced the ServiceUI API. The ServiceUI became the de facto standard way to associate user interfaces with Jini services, and was the first Jini community standard approved via the Jini Decision Process. Bill also serves as an elected member of the Jini Community's initial Technical Oversight Committee (TOC), and in this role helped to define the governance process for the community. He currently devotes most of his energy to building Artima.com into an ever more useful resource for developers.