Article Discussion
Working the Program
Summary: Ward Cunningham talks with Bill Venners about the flattening of the cost of change curve, the problem with predicting the future, and the program as clay in the artist's hand.
25 posts.
Most recent reply: January 13, 2004 1:53 PM by Isaac
    Bill
     
    Posts: 409 / Nickname: bv / Registered: January 17, 2002 4:28 PM
    Working the Program
    January 4, 2004 9:00 PM      
    Ward Cunningham talks with Bill Venners about the flattening of the cost of change curve, the problem with predicting the future, and the program as clay in the artist's hand.

    Read this Artima.com interview with Ward Cunningham:

    http://www.artima.com/intv/clay.html

    What do you think of Ward's comments?
    • Vincent
       
      Posts: 40 / Nickname: vincent / Registered: November 13, 2002 7:25 AM
      Re: Working the Program
      January 6, 2004 5:29 AM      
      > "If we made a change during week one, and it took us two days to understand what was really required, it took two days to make the change. If we made a change during week 21, and it took us two days to understand what was really required, it took us two days to make the change."

      This is a disappointingly disingenuous statement. It says that a change that takes two days at the beginning of a project takes the same time as a change that takes two days late in the project. That is self-evident, but it is not the problem that the "exponential cost of change" scenario addresses. The problem is the cost of making a given change early in the development cycle compared with the cost of making the SAME CHANGE later in the project.

      At the beginning of a project, the development team and the code-base are both relatively small. Bugs at this stage are easy to find and easy to fix. Similarly, design changes are easy to implement. Implications for other, as yet unwritten, parts of the system will be zero because those parts do not yet exist. As they are developed, they will - by definition - work with the changes that preceded them. Thus, we might have the two-day fix that was mentioned.

      However, later in the project, the number of developers and the code-base will likely both be larger. By this time THE SAME BUG may be very much more difficult to understand, particularly if the problem has been handed off to a developer who was not involved in the original code. Not only that, but the implications for other parts of the system are unpredictable, because those other parts were developed in the absence of the code change and may not be able to work with it without further changes. This is what the well-documented exponential cost-of-change problem is, and, unfortunately, the given answer skirts the issue and does nothing to address it.

      Vince.
    • rubyfan
       
      Posts: 8 / Nickname: rubyfan / Registered: January 1, 2004 4:07 PM
      modelling clay vs. marble
      January 6, 2004 3:25 PM      
      Another aspect of this topic that I would like to see more discussion on:

      In my experience, programming in a dynamically-typed language makes it much easier to keep your architecture more flexible and amenable to a change in requirements (which always happens). It seems like statically typed languages like C++ or Java are marble in this analogy, while dynamically typed languages like Ruby, Lisp or Smalltalk are modelling clay. It's much easier to make changes to a sculpture made with modelling clay than it is with marble - often a mistake with marble will require that you throw out the sculpture and start over.

      I'm currently converting a project that was originally done in C++ to Ruby. The project should probably have been done in a 'scripting' language in the first place, since it involves a lot of string processing with regexen and the production of HTML from a template (and it has no requirements for high performance). Since I inherited the project about a month ago, there have been some changes in requirements that will essentially cause us to eventually throw out the C++ code, since it cannot be adapted easily (adapting it would take longer than the Ruby rewrite). Hence the impetus to rewrite in Ruby. Using Ruby we'll be able to make the system much more flexible with much less work (productivity will be greatly improved), since it won't be tied to particular types when changes are made.
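
      To give a flavour of why a scripting language fits, here's a made-up sketch (the field names and line format are invented, not our actual code):

      # Hypothetical: the kind of regex-driven string munging that makes up
      # most of the system. Format and names are invented for illustration.
      LINE_RE = /^(\w+)\s+(\d{4}-\d{2}-\d{2})\s+(.*)$/

      def parse(line)
        return nil unless line =~ LINE_RE
        { :user => $1, :date => $2, :note => $3 }
      end

      p parse("wcunningham 2004-01-06 reworked the template")
      # prints the extracted fields as a hash, or nil if the line doesn't match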
      • Tim
         
        Posts: 10 / Nickname: tpv / Registered: December 9, 2002 1:41 PM
        Re: modelling clay vs. marble
        January 6, 2004 6:49 PM      
        > In my experience programming in a dynamically-typed
        > language makes it much easier to keep your architecture
        > more flexible and amenable to a change in requirements

        I hear that a lot, but my experience doesn't match.

        A statically-typed language will make the implications of your change immediately apparent. When you make the change, code no longer compiles.

        In a dynamically-typed language, the implications are still there, but you may only find them by testing.

        I've found strong, compile-time typing helps make changes in a robust way, because I can see a large proportion of the effects immediately. When the type checks only occur at runtime, I am less confident about the changes because I cannot easily tell how deeply my change will impact the system as a whole. The tests will show me, but the feedback loop is slower.
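
        A trivial sketch, using Ruby since that's the example at hand (names invented): rename a method and nothing complains until the stale call is actually executed.

        class Invoice
          def grand_total; 42; end   # renamed; this used to be called `total`
        end

        def print_summary(inv)
          "Total: #{inv.total}"      # stale call: no warning, no compile error
        end

        inv = Invoice.new            # constructing the object is still fine
        begin
          print_summary(inv)
        rescue NoMethodError => e
          puts "only discovered at run time: #{e.message}"
        end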
        • rubyfan
           
          Posts: 8 / Nickname: rubyfan / Registered: January 1, 2004 4:07 PM
          Re: modelling clay vs. marble
          January 6, 2004 7:24 PM      
          A statically-typed language will make the implications of your change immediately apparent. When you make the change, code no longer compiles.

          In a dynamically-typed language, the implications are still there, but you may only find them by testing.

          I've found strong, compile-time typing helps make changes in a robust way, because I can see a large proportion of the effects immediately.


          Oh, I'm not talking about weak typing. For example, in Ruby, if an object doesn't respond to a message that you're sending it, you're going to get an error. We tend to refer to this as 'duck typing' in the Ruby community (if it walks like a duck and quacks like a duck, I don't care what class it is - just so long as the interface of the class conforms to the way I want to use the object). Sure, it will have to be at run time that you see the error; that's where unit tests come in. I know that in my initial forays into dynamic typing I had the same worries that you're describing, but in practice I find it's not a big issue. The productivity gains to be had with dynamic typing seem, in practice, to be greater than the comfort offered by static, compile-time type checking. (Yes, I know this sounds like heresy in some quarters; you've probably got to try it out to see how it works for you. :-)
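
          A throwaway example of the duck-typing style, plus the unit test that backstops it (the class and method names are invented):

          require 'test/unit'

          # Two unrelated classes; all that matters is that each responds to #speak.
          class Duck;  def speak; "quack"; end; end
          class Robot; def speak; "beep";  end; end

          def announce(thing)
            "It says #{thing.speak}"   # no class check, just send the message
          end

          class TestAnnounce < Test::Unit::TestCase
            def test_anything_that_quacks_will_do
              assert_equal "It says quack", announce(Duck.new)
              assert_equal "It says beep",  announce(Robot.new)
            end

            def test_objects_without_the_interface_fail_loudly
              assert_raise(NoMethodError) { announce(Object.new) }
            end
          end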
      • Isaac
         
        Posts: 51 / Nickname: igouy / Registered: July 10, 2003 7:42 AM
        Re: modelling clay vs. marble
        January 6, 2004 4:41 PM      
        It's much easier to make changes to a sculpture made with modelling clay than it is with marble - often a mistake with marble will require that you throw out the sculpture and start over.
        Whereas we would just flatten the modelling clay? (Good analogies are hard to find.)

        The project should probably have been done in a 'scripting' language initially since it involves a lot of string processing with regexen and the production of html from a template (and it has no requirements for high performance).
        scripting language and/or template language
        http://today.java.net/pub/a/today/2003/12/16/velocity.html

        (Humbly suggest this has everything to do with choosing the wrong tool for the job, and not that much with change.)
        • Celia
           
          Posts: 6 / Nickname: redmore / Registered: June 24, 2003 0:14 PM
          Re: modelling clay vs. marble
          January 6, 2004 5:24 PM      
          Not only can a change made later in a project be the same price as one made early – if the product is well-designed, it can be cheaper. As you go on, the product develops a clear shape and it should be obvious how to modify it. If it isn’t, either your change is wrong-headed (trying to make a train float or fly) or your original code is obscure – not properly “kneaded”.

          In a way, this is exactly what agile programming is all about. Design a minimalist product first and then add functionality.
          • Tim
             
            Posts: 10 / Nickname: tpv / Registered: December 9, 2002 1:41 PM
            Re: modelling clay vs. marble
            January 6, 2004 6:55 PM      
            > Not only can a change made later in a project be the same
            > price as one made early – if the product is well-designed,
            > it can be cheaper.

            While that may be true, Ward didn't provide a very convincing argument for why it is the case.

            As Vincent noted, Ward did a particularly bad job of identifying why the cost of change has been observed to increase, and therefore the strategies he gave for overcoming it are not sufficient.

            Agile methodologies provide a number of other practices that help keep the cost of change low, but Ward didn't really cover them, and for thoroughness he probably should have.
          • Vincent
             
            Posts: 40 / Nickname: vincent / Registered: November 13, 2002 7:25 AM
            Re: modelling clay vs. marble
            January 6, 2004 10:55 PM      
            > Not only can a change made later in a project be the same
            > price as one made early – if the product is well-designed,
            > it can be cheaper.

            I can't see any reason why this should be true. From the day the first requirement is agreed, requirements are added and changed (usually added). The code-base inevitably increases in size and complexity as it includes more functionality. Equally, the team size generally grows, as does the need to interface the product with existing hardware and software, and the need to keep an increasing number of customers/users abreast of the progress of the project.

            You may have the prettiest, most "Agile" code in the world but it is still increasing in complexity every time new functionality and interdependence is added.

            None of the factors listed above are addressed by the "Agility" of the code.

            Vince.
        • rubyfan
           
          Posts: 8 / Nickname: rubyfan / Registered: January 1, 2004 4:07 PM
          Re: modelling clay vs. marble
          January 6, 2004 7:07 PM      
          scripting language and/or template language
          http://today.java.net/pub/a/today/2003/12/16/velocity.html

          (Humbly suggest this has everything to do with choosing the wrong tool for the job, and not that much with change.)


          Sure, we're essentially doing the same thing with erb, a Ruby templating package. It's used heavily for the examples in Jack Herrington's Code Generation in Action book. erb and velocity seem to be targeting the same problem space.
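
          For anyone who hasn't seen erb, it's just this sort of thing (a toy example; the data and markup are made up, not our real templates):

          require 'erb'

          # Toy data and markup, just to show the shape of an erb template.
          rows = [ { :name => "clay",   :cost => 2  },
                   { :name => "marble", :cost => 40 } ]

          row_template = ERB.new("<tr><td><%= r[:name] %></td><td><%= r[:cost] %></td></tr>")

          puts "<table>"
          rows.each { |r| puts row_template.result(binding) }
          puts "</table>"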

          Whereas we would just flatten the modelling clay?
          Depends on what the problem is. If you just accidentally chipped the nose off your marble copy of David, you're pretty much done. If you knocked it off your modelling clay version, you could just stick it back on. ;-) (Well, I suppose you could go looking for a tube of super glue. :) And of course, the modelling clay allows for design reuse - you like the arms on your old version of the statue? It's easy to just cut them off and stick them on the new one. ;-)

          (Good analogies are hard to find.)

          How true. Certainly my analogy isn't perfect. My point has to do with flexibility (agility?)

          I've done projects with both statically typed and dynamically typed languages, and it just seems to me that I'm having to 'set things in stone' up front to a much greater degree with the statically typed language, while in a dynamically typed language I seem to have a lot more flexibility for a longer portion of the project. I suppose that some would argue that 'setting things in stone' up front is a good engineering practice, but given the nature of software engineering, where requirements are often not 'set in stone' (and it really isn't something the engineers have much control over), it seems to me that flexibility is a plus, since the only certainty is that change will happen.
          Based on my experience, it seems a lot easier to evolve a program written in a dynamically typed language than one written in a statically typed language.
          • Isaac
             
            Posts: 51 / Nickname: igouy / Registered: July 10, 2003 7:42 AM
            Re: modelling clay vs. marble
            January 6, 2004 11:28 PM      
            I've done projects with both statically typed and dynamically typed languages
            Before this degenerates into static type checking versus dynamic type checking, let's note that the notions expressed in the article are independent of programming language.
            • rubyfan
               
              Posts: 8 / Nickname: rubyfan / Registered: January 1, 2004 4:07 PM
              Re: modelling clay vs. marble
              January 6, 2004 11:48 PM      
              the notions expressed in the article are independent of programming language.

              Perhaps. But practically speaking, the choice of language type (static or dynamic) can have a huge impact on the notion of code flexibility.
              • Isaac
                 
                Posts: 51 / Nickname: igouy / Registered: July 10, 2003 7:42 AM
                Re: modelling clay vs. marble
                January 7, 2004 9:46 AM      
                > the notions expressed in the article are independent
                > of programming language.
                Perhaps.
                No 'perhaps' - the article is about process.

                But practically speaking
                The topic is practice not theory - I was speaking about practice.

                the choice of language type (static or dynamic) can have a huge impact on the notion of code flexibility
                There are huge differences between programming languages within those 'language types' (statically checked / dynamically checked).

                Working with Haskell and ML is so different from working with C++.

                Working with Erlang is so different from working with Ruby.

                Experience with C++ doesn't generalize to 'statically checked languages'; experience with Ruby doesn't generalize to 'dynamically checked languages'.
    • Joe
       
      Posts: 15 / Nickname: jcheng / Registered: October 16, 2002 8:08 AM
      Re: Working the Program
      January 7, 2004 1:32 PM      
      Not all requirements changes are of the type where you "write 20 statements and change four". In my experience, there are many types of decisions that are going to be much cheaper to make earlier than later--regardless of your choice of process, methodology, programming language, or hairstyle.

      - Globalization/internationalization
      - Exception handling and logging policy
      - "Branded" versions of apps or webapps
      - Clustered deployment
      - Client type (e.g. adding a Swing client to a webapp)

      IMO you'd be crazy not to plan for internationalization if there's even a 25% chance that a Japanese port is in your future--even if the only action you take is to pick a programming platform that supports Unicode. Yet the XP literature unequivocally states "Turn a blind eye towards future requirements and extra flexibility." http://www.extremeprogramming.org/rules/early.html

      It seems to me that this particular XP rule makes a lot of sense when you're talking about programming at a micro level--individual implementations of classes or algorithms, or even of a module/package--but less so when you're talking about high-level architecture or application-wide concerns.
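
      To make the internationalization point concrete, here's a contrived sketch (the locales and messages are invented): pulling user-visible text into one table on day one costs almost nothing, while retrofitting it later means touching every string literal in the code base.

      # Contrived sketch: all user-visible text lives in one table from day one,
      # so adding a locale later is a data change, not a hunt through the code.
      MESSAGES = {
        :en => { :greeting => "Welcome back, %s" },
        :ja => { :greeting => "おかえりなさい、%s" }
      }

      def message(locale, key, *args)
        MESSAGES[locale][key] % args
      end

      puts message(:en, :greeting, "Ward")
      puts message(:ja, :greeting, "Ward")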
    • Celia
       
      Posts: 6 / Nickname: redmore / Registered: June 24, 2003 0:14 PM
      re: Working the Program
      January 7, 2004 6:30 AM      
      > Not only can a change made later in a project be the same
      > price as one made early – if the product is well-designed,
      > it can be cheaper.

      A couple of people have challenged this notion, so let me see if I can do a better job of explaining myself – so the theory can either be confirmed or knocked out of consideration.

      Building a highrise (to use the original paradigm for software projects) is a serial activity. You dig the basement, add the lobby and all the floors, and put a roof on top. Deciding later that the basement should have been bigger is the kind of change that gave rise to the original idea that the later in a project a change is made, the more expensive it is.

      Now consider a properly designed project -- where by proper, I simply mean that you know from day one whether you’re building a residential or office tower, for example. Construct the edifice with only those items that are expected to be unchangeable (basement, supports, elevator tower). Don’t, for example, build internal walls.

      Then new tenants can be quickly and easily moved in and out, by adding or removing internal walls and so on according to their specifications. Leaving the internal design to last makes changes cheaper.

      The important part, as Ward emphasized, is that you have to be clear early on what is fundamental (i.e. static) and what should be left sketchy or omitted (dynamic). The error BDUF (big design up front) projects used to make was in not differentiating these parts, but instead just creating a big pile of code.
      • Vincent
         
        Posts: 40 / Nickname: vincent / Registered: November 13, 2002 7:25 AM
        Re: re: Working the Program
        January 7, 2004 1:30 PM      
        > The important part, as Ward emphasized, is that you have
        > to be clear early on what is fundamental (i.e. static) and
        > what should be left sketchy or omitted (dynamic).

        In other words, you're saying...
        1) You cannot get away from doing design up front and predicting what changes are permitted and what aren't.
        2) There are limitations on what you can change later in the project.

        Unfortunately, both these points strongly contradict "Agile" theory, which says that:
        1) Don't attempt to predict and design for possible later changes.
        2) Anything can be changed at any time by anyone.

        Vince.
      • Isaac
         
        Posts: 51 / Nickname: igouy / Registered: July 10, 2003 7:42 AM
        Re: re: Working the Program
        January 7, 2004 10:01 AM      
        Construct the edifice with only those items that are expected to be unchangeable (basement, supports, elevator tower)...

        That seems to be a "traditional" approach:

        "We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others." Parnas 1972

        http://www.acm.org/classics/may96/

        you have to be clear early on what is fundamental (i.e. static) and what should be left sketchy or omitted (dynamic).
        Yes, that's the problem! What's the solution?
        How do you know what will not change?
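
        (For what it's worth, the Parnas idea in miniature, with invented names: the decision we judge likely to change is hidden behind one small class.)

        require 'tmpdir'

        # Invented example: "how are wiki pages stored?" is the decision we
        # expect to change, so it is hidden inside PageStore. Callers never
        # learn whether it is flat files, a DBM, or a database.
        class PageStore
          def initialize(dir)
            @dir = dir
          end

          def read(name)
            File.read(File.join(@dir, name))
          end

          def write(name, text)
            File.open(File.join(@dir, name), "w") { |f| f << text }
          end
        end

        # Changing the storage decision later means rewriting PageStore,
        # not hunting through every caller.
        store = PageStore.new(Dir.tmpdir)
        store.write("FrontPage", "Welcome to the wiki.")
        puts store.read("FrontPage")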
        • Celia
           
          Posts: 6 / Nickname: redmore / Registered: June 24, 2003 0:14 PM
          Re:Working the Program
          January 7, 2004 1:58 PM      
          How do you know what will not change?

          Unless you contend that any piece of software can morph into any other piece of software, the initial design must include constraints. The trick is to make the initial constraints no more limiting than is absolutely necessary. Call it art or engineering, it’s part of the responsibility of the designer to make this call.

          Parnas was looking at this from the opposite end: decide what may change and encapsulate it. The problem, as you say, is that we can’t know what changes will be required. That is why there is an advantage in deciding what is absolutely fundamental and encapsulating everything else.

          Is this pure semantics? I don’t think so. The software that has lasted longest has been that where the original designers displayed genius (or luck) in creating a kernel, which is fundamentally separate from its visible functions and which did no more than absolutely necessary to define it as a product.

          I’m certainly suggesting that it’s not possible (except fortuitously) to create software that is entirely malleable. One has to define a core product, which at some point in time will have to be discarded.
          • Kart
             
            Posts: 1 / Nickname: kartbart / Registered: January 7, 2004 10:20 AM
            Re: Re:Working the Program
            January 7, 2004 3:26 PM      
            "The software that has lasted longest has been that where the original designers displayed genius (or luck) in creating a kernel."

            Yes, and the CPU architecture has not evolved in any dramatic way either. It is ludicrous to compare CPU and kernel software, which is produced from documentation that makes a phone book look like a joke, with business software, which is usually run by business owners who are clueless about s/w development.

            In my opinion there are two types of changes:
            a) Changes that are unforeseen
            b) Changes due to misinterpretation.

            For (b), the designer failed to give the user a feel for the system as soon as possible, misread the requirement, or some such "error" occurred. For (a), it is usually due to circumstances that are beyond anyone's control. The pressure in (a) arises because everyone has the notion that s/w is very flexible and hence anything can be changed. The reality is that software is dangerously flexible: you can easily code the wrong thing, and change is expensive.

            In most cases, change late in the game is costly not because you have more lines of code to change but because:
            - You have to update the requirements spec
            - You have to update the design document
            - You have to update the code
            - You have to update the unit tests
            - You have to update the test scripts
            - You typically have to run regression tests all over again
            - You have to re-educate your client
            - You have to update the user's guide
            - etc.
          • Isaac
             
            Posts: 51 / Nickname: igouy / Registered: July 10, 2003 7:42 AM
            Re: Re:Working the Program
            January 7, 2004 3:35 PM      
            Parnas was looking at this from the opposite end: decide what may change and encapsulate it ... there is an advantage in deciding what is absolutely fundamental and encapsulating everything else.
            Is this pure semantics? I don’t think so.

            These seem to be duals: identify what we're unsure about and encapsulate it; identify what we're sure about and encapsulate everything else.

            Seemed like the article suggested something quite different - we shouldn't worry about future changes (we don't guess right often enough). Instead design for the requirements that exist.
            • Celia
               
              Posts: 6 / Nickname: redmore / Registered: June 24, 2003 0:14 PM
              Re: Re:Working the Program
              January 7, 2004 4:03 PM      
              Seemed like the article suggested something quite different - we shouldn't worry about future changes

              Now we're decidedly into shades of grey -- and I don't want to guess (keep guessing?) what somebody else is thinking.

              This is probably where we should stop trying to create a rule to end all rules, and create different approaches depending on whether we're talking about some trivial one-off project clearly delineated by a particular customer, or something grander, expected to last much longer.

              Maimonides: In the beginning we must simplify the subject, thus unavoidably falsifying it, and later we must sophisticate away the falsely simple beginning.
              • Isaac
                 
                Posts: 51 / Nickname: igouy / Registered: July 10, 2003 7:42 AM
                Re: Re:Working the Program
                January 9, 2004 7:59 AM      
                create different approaches
                Looking at the problem and then putting together a process for that problem does seem like a smart thing to do.

                The idea of "blending process models" was explicit in "Strategies for Software Engineering" Martyn A. Ould 1990.

                AFAIK only Michael Jackson has tried to characterize problems (rather than solutions).

                http://www.ferg.org/pfa/
                • Isaac
                   
                  Posts: 51 / Nickname: igouy / Registered: July 10, 2003 7:42 AM
                  Re: Re:Working the Program
                  January 9, 2004 8:29 AM      
                  > AFAIK only Michael Jackson has tried to characterize
                  > problems (rather than solutions).
                  Better references:
                  http://dspace.dial.pipex.com/jacksonma/
    • Frank
       
      Posts: 135 / Nickname: fsommers / Registered: January 19, 2002 7:24 AM
      Re: Working the Program
      January 8, 2004 0:11 AM      
      I agree with Ward that a lot of what we consider great design is often a product of evolution, rather than of someone sitting down and figuring everything out up-front. I used to start designing things by first defining the interface of some component, and then figuring out how the desired components interact via their interfaces. While I still think there is room for that sort of approach, I now tend to build things by creating small pieces of functionality, and then refactoring them according to some logical criteria that often emerge during the process. I suppose the latter approach is similar to the molding of clay, or of marble, that Ward talks about.

      While at first I didn't like the latter approach on intellectual grounds (I considered it "hacking" vs. architecting), the "molding the clay" approach has proved more effective than the "top-down" architecting approach in many situations. Interestingly, and counter to my initial intuition, the upfront architecting approach seems to work well on smaller projects. As the size of the project grows, and with it the number of requirements and classes, having a clear-cut, initially well thought-out architecture often becomes a burden. At some point, the decision must be made to either completely revise the up-front design (which makes that design less up-front), or simply go without a grand design.

      In lieu of a grand design, what I often find useful, though, are idioms. These are not really patterns in the fancy sense of the word, but just small, habitual ways of doing things. If an idiom works well, reusing it over and over again helps give the program a sense of unity, since even a large program can often be reduced to a handful of idioms. The set of idioms gives the program a sense of Gestalt. Also, developers reading code based on idioms have to understand only that handful of idioms to start contributing to the program.
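
      To show what I mean by an idiom, here's a throwaway Ruby illustration (the names are made up): one small habitual shape, reused wherever it fits.

      # Invented idiom: wrap a bit of work in a block that handles the
      # bookkeeping, then use the same shape everywhere it applies.
      def with_timing(label)
        start = Time.now
        result = yield
        puts "#{label}: #{(Time.now - start).round(3)}s"
        result
      end

      with_timing("load pages")   { sleep 0.10 }  # the same idiom recurs
      with_timing("render index") { sleep 0.05 }  # throughout the program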
      • Gregg
         
        Posts: 28 / Nickname: greggwon / Registered: April 6, 2003 1:36 PM
        Re: Working the Program
        January 12, 2004 1:44 PM      
        > I agree with Ward that a lot of what we consider great
        > design is often a product of evolution, rather than
        > someone sitting down and figuring everything out up-front.
        ...
        > In lieu of a grand design, what I often find useful,
        > though, are idioms.

        This is exactly what I use. I have particular patterns of overall system architecture that I utilize in particular situations.

        I often cast these patterns into classes at some point and provide enough augmentation via abstract interfaces to make them a general solution to a particular class of problem.

        Frank, you sparked me to ramble some more about languages over in my blog...

        http://www.artima.com/weblogs/viewpost.jsp?thread=28518
      • Isaac
         
        Posts: 51 / Nickname: igouy / Registered: July 10, 2003 7:42 AM
        Re: Working the Program
        January 13, 2004 1:53 PM      
        a lot of what we consider great design is often a product of evolution, rather than someone sitting down and figuring everything out up-front

        This makes upfront design sound like immaculate conception! Prototyping and iteration are commonplace in design activities.

        Why do we have to choose between everything upfront and nothing upfront?

        first defining the interface of some component, and then figuring out how the desired components interact via their interfaces... creating small pieces of functionality, and then refactoring them according to some logical criteria that often emerges during the process

        Isn't this top-down vs. bottom-up?
        What about middle-out edge-to-edge design?

        In "Software Requirements & Specifications" Michael Jackson points out that for inventing or designing new things top-down is the riskiest possible ordering of decisions. The largest decision, the subdivision of the whole problem, is taken first when nothing is yet known and everything remains to be discovered.

        (Dijkstra sidesteps this "I assume the programmer's genius matched to the difficulty of his problem and assume that he has arrived at a suitable subdivision of the task."!)

        Prototyping and iteration are commonplace in design activities - the interesting question is: where are the cheap places in the process to do design iteration? When is design iteration waste?

        As the size of the project grows, and with it the number of requirements and classes, having a clear-cut, initially well thought-out architecture often becomes a burden
        How much is this due to lack of tool support?

        At some point, the decision must be made to either completely revise the up-front design (which makes that design less up-front)
        Wouldn't that be the (no less) up-front design of version 2?