Ward Cunningham talks with Bill Venners about the flattening of the cost of change curve, the problem with predicting the future, and the program as clay in the artist's hand.
In the software community, Ward Cunningham has a reputation for being a font of ideas. He invented CRC Cards, a technique that facilitates object discovery. He invented the world's first wiki, a web-based collaborative writing tool, to facilitate the discovery and documentation of software patterns. Most recently, Cunningham has been credited as the primary inspiration behind many of the techniques of Extreme Programming.
On September 23, 2003, Bill Venners met with Ward Cunningham at the JAOO conference in Aarhus, Denmark. In this interview, which will be published in multiple installments on Artima.com, Cunningham gives insights into wikis and several aspects of Extreme Programming.
Bill Venners: In Extreme Programming Explained, Kent Beck wrote, "One of the universal assumptions of software engineering is that the cost of changing a program rises exponentially over time," and suggested that, "With a combination of technology and programming practices, it is possible to experience a curve that is really quite the opposite." How can the cost of change curve be flattened?
Ward Cunningham: Traditionally, the cost of change curve said that if we detect the need for a change early, it costs less to make the change than if we detect the need late. I tackled that curve by saying, let's almost intentionally make mistakes so we can practice correcting them. That practice will help reduce the cost of making changes late.
Our feeling was that the limiting factor on any change was not when it was done, but how much thinking was required. If we made a change during week one, and it took us two days to understand what was really required, it took two days to make the change. If we made a change during week 21, and it took us two days to understand what was really required, it took us two days to make the change.
In week one, we might have had to write 20 statements. In week 21, we might have had to write 20 statements and change four. But if you practice making changes, the time it takes to change four statements is not that great. You go find the statements and you change them. It might take a minute.
So understanding the need for the change was the limiting factor. Programming it was inconsequential. Once we understood a change, we could program it—early or late. The cost of actually changing the code did not dominate the programming. The dominant cost was the time it took to understand what was required, and that gave us a flattening of the cost of change curve.
Many people are afraid of changes because although they understood the code when they wrote it, that understanding has disappeared. They'll tell you, "We worked so hard on these statements. Whatever you do, don't change these statements!" They don't want to go back to that code, because regaining the understanding would be too costly. So another way to help flatten that cost of change curve, to make change cost no more later than it does now, is to decide that people must be able to look at what they're going to change and understand it. Therefore, when you write code, you're writing more to the person who's going to be reading the code than to the machine that's going to run it.
And again, you don't want to write a big comment that tells others how to make a change they might want to make, because you don't know what change they're going to want to make. Better to have the attitude that you can't help future programmers make their changes. All you can do is make it easy for them to understand what you were trying to do. And it will be easiest for them to understand what you were trying to do if you were very careful not to try to do too much. The more you try to do, the more work future programmers will have to do to understand what you've done.
For example, if you just blatantly ignore a situation that a future programmer needs to deal with, they will come into the code and discover that you blatantly ignored the situation. That means they have the freedom to do whatever is required. But if you tried to account for the situation, they will come in and realize it isn't working. They'll see you tried to account for it, so they'll first attempt to understand what you were trying to do. Once they understand what you were trying to do, they can figure out how to change it to do what they know they need to do. They would much rather come in and discover that you didn't think about the situation at all, or that you thought about it but didn't program one bit of it.
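The interview itself contains no code, but Cunningham's point can be sketched in a hypothetical example (the function and its behavior are invented for illustration): when a situation is deliberately out of scope, fail loudly rather than half-handling it, so the next programmer is free to implement it from scratch.

```python
def parse_quantity(text: str) -> int:
    """Parse a plain integer quantity such as "42"."""
    if text.strip().startswith("-"):
        # Negative quantities were never needed here. Make that obvious
        # instead of silently guessing at a behavior a caller might come
        # to depend on; a future programmer sees this was never attempted.
        raise NotImplementedError("negative quantities are not handled")
    return int(text)
```

A half-written negative-number branch would force the next programmer to first understand a broken attempt; the explicit refusal leaves them free to do whatever is required.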
Bill Venners: Everyone would likely agree that predicting the future is difficult, but is it always a bad idea?
Ward Cunningham: In science it's easy to predict the future. Science is built upon studying the behavior of physical systems, which turn out to be, maybe short of the weather, pretty amazingly predictable. We test a theory by testing its predictive capability. The fact that we can shoot a rocket out into space and cause it to orbit—that is the epitome of prediction. But when we start talking about what will be desired in the future, we might have some instincts and they might be right, but they won't always be right. And we have to attend to the times when they aren't right.
I love it when a new requirement comes in, and we look at it and say, "Well, that's easy. The program is made to do that." We put the pieces into the program, and it just fits. I hate it when a new requirement comes in that doesn't fit nicely, as if the program were designed to make the requirement hard. In that case, we have a lot of work to do. But the nature of the work is first changing the program so the new requirement is an easy fit, and then doing the easy work to incorporate the requirement. In other words, instead of patching the new requirement onto an architecture that was not made to accommodate it, just buckle down and do the hard work to change the architecture so the requirement is easy to implement. The patch approach means that the next guy who comes along will have to understand both the system that wasn't made to do the new requirement, and the patch that tried to overcome that system without changing it. It's much better to change the system to accommodate the new feature easily.
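A hypothetical sketch of that two-step approach (none of these names or rules come from the interview): the original code hard-wires one discount rule, so a new rule would have to be patched in. First the architecture is changed so rules are data, then the new requirement becomes an easy, local addition.

```python
# Step 1: restructure so each rule is one entry, not a special case.
# A rule is a (predicate, multiplier) pair; more specific rules go first.
RULES = [
    (lambda total: total > 100, 0.9),  # the original hard-wired rule
]

def apply_discount(order_total: float) -> float:
    """Apply the first matching discount rule, if any."""
    for applies, factor in RULES:
        if applies(order_total):
            return order_total * factor
    return order_total

# Step 2: the new requirement is now an easy fit, not a patch.
RULES.insert(0, (lambda total: total > 500, 0.8))
```

The next programmer who adds a rule sees a system made to accommodate it, rather than a patch fighting an architecture that wasn't.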
Now somebody might say, "Why don't we look forward, look at all the work we have to do? Why don't we design a system that makes all work easy from the beginning?" And if you can pull that off, that's great. It's just that, over and over, people try to design systems that make tomorrow's work easy. But when tomorrow comes it turns out they didn't quite understand tomorrow's work, and they actually made it harder.
Bill Venners: To tackle the cost of change curve, you found a way to make it practical to make changes all the way through the lifetime of a project. And that made it less important to plan for the future, because you could make changes when they were actually needed as the future unfolded. Does an overall architecture simply emerge through the process of focusing only on each small step?
Ward Cunningham: I like the notion of working the program, like an artist works a lump of clay. An artist wants to make a sculpture, but before she makes the sculpture, she just massages the clay. She starts towards making the sculpture, and sees what the clay wants to do. And the more she handles the clay, the more the clay tends to do what she wants. It becomes compliant to her will.
A development team works on a piece of code over several months. Initially, they make a piece of code, and it's a little stiff. It's small, but it's still stiff. Then they move the code, and it gets a little easier to move. In a project I mentioned earlier [in Part II], we added a schema evolution capability to our database. It softened up the program. It was much easier to change. Every time we did a schema change, we got better at it. The programmers and the code—as a unit—softened up. We worked the program, and we kept it flexible.
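The project's schema evolution code isn't shown in the interview; this is a guessed, minimal sketch of the idea (all field names and versions are invented): each migration is a small, practiced step from one schema version to the next, and a record is upgraded by applying them in order.

```python
def migrate_1_to_2(rec: dict) -> dict:
    # Version 2 renamed "name" to "full_name".
    rec = dict(rec)
    rec["full_name"] = rec.pop("name")
    rec["version"] = 2
    return rec

def migrate_2_to_3(rec: dict) -> dict:
    # Version 3 added an "email" field with a default.
    rec = dict(rec)
    rec.setdefault("email", "")
    rec["version"] = 3
    return rec

# One entry per schema change; adding the next one is routine.
MIGRATIONS = {1: migrate_1_to_2, 2: migrate_2_to_3}

def upgrade(rec: dict, target: int) -> dict:
    """Apply migrations until the record reaches the target version."""
    while rec["version"] < target:
        rec = MIGRATIONS[rec["version"]](rec)
    return rec
```

Because each change is small and the mechanism is exercised constantly, the next schema change is cheap, which is the "softening up" Cunningham describes.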
At the end of the project you have done everything that needs to be done—everything that somebody has paid for anyway—and you look at the code and ask, "What is in the core of this lump of stuff?" You ask, "How did it turn out? After we worked the program day in and day out, how did it end up?" Often, the program ends up amazing. You'll say, "This is beautifully architected." Well, where did that architecture come from?
In this case, architecture means the systematic way we deal with diverse requirements. Architecture allows us, when we go to do work we need to do on the program, to find where things go. It is a system that was worked into the program by all the little decisions we made—little decisions that were right, and little decisions that were wrong and corrected. In a sense we get the architecture without really trying. All the decisions in the context of the other decisions simply gel into an architecture.
Come back Monday, January 12 for the next installment of this conversation with Ward Cunningham.
Bo Leuf and Ward Cunningham are the authors of The Wiki Way: Quick Collaboration on the Web, which is available on Amazon.com at:
Portland Pattern Repository:
Information on CRC (Class-Responsibility-Collaboration) Cards:
XProgramming.com - an Extreme Programming Resource:
PLoP, the Pattern Languages of Programming conference: