Summary
NoMagic released MagicDraw UML 11, tightly integrating its design tool with Eclipse, and facilitating round-trip engineering between design and code.
A key agile development tenet is to implement a large system in tiny increments, coding up small pieces of tests and functionality one at a time until the system provides all the required use cases. Such incrementalism seems to fly in the face of the big up-front design school that advocates doing much design and modeling before starting on any code. To many, the Unified Modeling Language (UML) epitomizes the big up-front design philosophy, where a project might start with months of object modeling before developers can at last fire up their code editors.
In reality, both the code-only and the long up-front modeling views of development are extremes, according to Gary Duncanson, president of No Magic, a company whose popular UML modeling tool, MagicDraw, aims at a middle ground. That middle ground is round-trip engineering, the ability to go back and forth between code and a visual object, data, or business process model. As the graphical model changes, a round-trip-capable tool generates code reflecting those changes. And the graphical model is updated in tandem with changes made to the code.
To coding-only agile developers, No Magic's Duncanson wishes "Good luck. We would call this [hybrid approach] a more agile process. We [have been] using this for 11 years, and found that it works very well. [Coding-only] XP is not a good idea: [it's] good for prototyping, but if you want to produce more than a 1.0 release and keep cost down over the process, a picture is worth a thousand words."
MagicDraw provides not only UML diagramming and round-trip capabilities, but also business process modeling and database design. The most recent version integrates with Eclipse, providing round-trip coding and design capabilities inside the Eclipse IDE. MagicDraw is used by solo developers working on small projects as well as in large team settings where a project might involve several hundred database tables and thousands of objects, according to Duncanson.
In a larger team environment, MagicDraw allows various project participants—business analysts, developers, architects, and database developers—to share and update design artifacts. MagicDraw provides a server component where team collaboration is required. Since developers often must work with legacy code, MagicDraw's reverse-engineering capability also helps quickly create baseline design documentation for projects that otherwise lack documentation.
I maintain what I regard as a healthy skepticism about vendors' claims. That skepticism tends to solidify when they lie about alternative approaches.
The lie in this case is that XP isn't good for anything beyond getting a 1.0 release out the door.
Making a product that can be enhanced indefinitely is hard. It's not impossible; there are products that have been enhanced for decades without needing a complete redesign and without falling into disastrous code rot.
Part of XP is a toolbag of techniques for doing exactly that. There is no real excuse for an XP team producing a product that can't be enhanced indefinitely. This is one of XP's primary selling points: you can use it to create a product where the cost of adding features is pretty much level regardless of when you add the feature. In fact, the XP process almost guarantees failure quite quickly if you don't apply those techniques.
MDA brings nothing whatsoever to this discussion. The reason is quite simple: to make a product that can be enhanced indefinitely requires continuous redesign, which in turn requires that the design documentation is continuously redone. In XP, the low level design documentation is embodied in the code and unit tests and the high level documentation is a set of diagrams (frequently hand drawn) kept on the team Wiki (or equivalent).
In a CASE-tool approach (and that's what MDA is all about when you get past the hype: a warmed-over CASE-tool approach), the design documents still have to change, and at the same rate. Generating the code from the tool simply moves the effort from one place to another. And that's true even if the tool generates final code that never has to be touched. If the tool generates prototype code that then has to be enhanced by hand (and that's what round trip suggests to me), it's going to be worse.
The guts of the XP approach is the very tight coupling between writing tests and writing code that's embodied in TDD. I have yet to see an MDA approach which can be used in this fashion. That doesn't mean it doesn't exist; just that I haven't seen it, and after the Microsoft fiasco in claiming that they support TDD when their team didn't have clue one about it, I'm going to be skeptical of claims until I see it or hear reports from a credible source that they've seen it.
> Part of XP is a toolbag of techniques for doing exactly that. There is no real excuse for an XP team producing a product that can't be enhanced indefinitely. This is one of XP's primary selling points: you can use it to create a product where the cost of adding features is pretty much level regardless of when you add the feature.

I'd like to challenge this statement. Kent Beck sent me XP Explained for review back in 1998, I think, before it was published. I unfortunately didn't have time to give him much useful feedback. I mainly asked him one question: what evidence does he have that these techniques will really flatten the cost of change curve? He replied that we'll just have to take his word for it. That told me XP was a "what if" kind of proposition, which was fine. I think that "what if" has done the industry a lot of good on the whole. Many years later, however, I'd like to re-ask the same question. What evidence do you have that the cost of change curve stays flat if you use the XP techniques?
What makes more sense to me is the take Luke Hohmann has on the cost of change curve: it is bumpy. It bumps up with new requirements that weren't foreseen. And the flattening of the curve happens when you get architectural pieces in place. For example, I've been talking recently about DSLs we created earlier this year at Artima. In the future, I believe those DSLs will help us move very fast making the kinds of changes they were designed to help us make, which are the kinds of changes we foresee given our current requirements. When those requirements change, I expect the cost of change will bump up again.
Luke's discussion of a bumpy cost of change is here:
What I'd like to understand is, now that the community has had many years of experience with XP, what the actual reality is about the way XP practices influence the cost of change. What has been your actual experience with regard to XP and cost of change?
> What has been your actual experience with regard to XP and cost of change?
I have been involved in XP projects and non-XP projects. And I have seen XP practices reducing the cost of change, though not making it "flat". There are a few things that make the change curve exponential. 1. Not thinking of testing while developing the software. XP enforces test-driven iterative development. This is the single most important practice affecting the change curve. With test-driven development, code is generally written with testing in mind. Every piece of code has tests around it. Michael Feathers calls this a "software vise" (see the sketch at the end of this post). With this type of code, it's really very comfortable to make changes.
2. Developers are not domain aware. This is a very serious issue that affects many software projects. XP encourages direct communication between software developers and business experts. This helps make developers domain-aware and think in terms of the domain. I have seen many projects where computer science abstractions are prominent in the code, but domain abstractions are scattered. Domain-aware developers definitely write code that is in line with the domain. This helps in maintaining/changing the code.
I have seen the above two points repeated in at least half a dozen projects. XP practices (or Agile practices in general) definitely help a lot in developing software that is easier to change.
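To make the "software vise" idea concrete, here is a minimal JUnit sketch (the DiscountCalculator class and its discount rule are hypothetical, invented only for this example). The tests pin the module's current behavior in place, so any change that accidentally alters it fails the build within seconds rather than surfacing weeks later:

```java
import junit.framework.TestCase;

// Hypothetical module under test: computes an order discount.
class DiscountCalculator {
    double discountFor(double orderTotal) {
        return orderTotal >= 100.0 ? orderTotal * 0.10 : 0.0;
    }
}

// The "vise": assertions that hold the current behavior firmly in place
// while you refactor or extend the code around it.
public class DiscountCalculatorTest extends TestCase {
    public void testNoDiscountBelowThreshold() {
        assertEquals(0.0, new DiscountCalculator().discountFor(99.0), 0.001);
    }

    public void testTenPercentDiscountAtThreshold() {
        assertEquals(10.0, new DiscountCalculator().discountFor(100.0), 0.001);
    }
}
```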
> I maintain what I regard as a healthy skepticism about vendors' claims.
I agree, but you should keep in mind that XP is a product too. One that generates millions of dollars a year in books, training etc.
Software methodologies have become a big business and they have adopted the same marketing tricks that sell soap and soda.
XP, like CMM, has little or no evidence supporting its claims. When you ask the difficult questions, the answer you get from proponents is little better than "Things go better with Coke".
> I have been involved in XP projects and non-XP projects. And I have seen XP practices reducing the cost of change, though not making it "flat". There are a few things that make the change curve exponential.
> 1. Not thinking of testing while developing the software. XP enforces test-driven iterative development. This is the single most important practice affecting the change curve. With test-driven development, code is generally written with testing in mind. Every piece of code has tests around it. Michael Feathers calls this a "software vise". With this type of code, it's really very comfortable to make changes.

I'm curious exactly how a lack of tests makes the cost of change curve go up, and how the presence of tests helps flatten it. What exactly is the mechanism? I can understand that you would be more comfortable making changes with tests, because they make you feel like you have a safety net under you. But I don't see how that would reduce the cost of change, just perhaps the fear of it. I guess there could be some reduction in cost by the tests pointing out all the broken places, whereas if the tests weren't there I'd have to spend a lot more time chasing them down? It doesn't seem like that would make an exponential difference in cost.
The other thing that TDD tends to do is help you create clean designs, and I do believe a better design would have a lower cost of change than a worse one. But TDD isn't the only way to get a good design, and I don't think the best design for today's requirements will help reduce the cost of change when the requirements change drastically tomorrow.
> 2. Developers are not domain aware. This is a very serious issue that affects many software projects. XP encourages direct communication between software developers and business experts. This helps make developers domain-aware and think in terms of the domain. I have seen many projects where computer science abstractions are prominent in the code, but domain abstractions are scattered. Domain-aware developers definitely write code that is in line with the domain. This helps in maintaining/changing the code.
>
> I have seen the above two points repeated in at least half a dozen projects. XP practices (or Agile practices in general) definitely help a lot in developing software that is easier to change.

I do believe that the less domain-aware you are, the less your design will likely be able to accommodate the kinds of change common in your domain. I think the most important ingredient of a good design is domain experience. So this kind of rings true to me, because until the requirements change drastically, you'll be faced with the kinds of change that are expected in the domain. And if you know the domain, you can architect your system to accommodate that kind of change easily and cheaply. I'm curious to what extent you need to have your own domain experience versus the domain experience of business experts. But absent the first, or even with it, I would expect having a "customer" you can ask questions as you go would be much more helpful than a specification.
Anyway, to me domain experience is what's important in creating a good architecture/design, which leads to a lower cost of change--until the requirements change drastically. Is your experience that having an XP-like close relationship with business experts can help compensate for a lack of domain experience?
> Anyway, to me domain experience is what's important in creating a good architecture/design, which leads to a lower cost of change--until the requirements change drastically. Is your experience that having an XP-like close relationship with business experts can help compensate for a lack of domain experience?
While XP encourages communication with a customer, it's important to realize that a "customer" is not necessarily a domain expert. By a domain expert, I don't just mean someone with knowledge of the business an application addresses, but someone with experience of how software must function in the context of the business requirements.
I worked once with a casting agency on developing a movie casting Web app. The folks I interacted with each had several decades of expertise in the casting business - but none in building a casting Web site. For instance, how would actors upload their photos - in what format, what sizes, and what resolutions? What software would actors use to edit their photos to the required format? These, and dozens of others, were crucial questions for the success of this app, but these domain experts had no knowledge of such issues. We spent a lot of time exploring these avenues, going down a few blind alleys in the process. At the end, we all became domain experts - in the domain of a casting Web app.
So in this example, interacting with experts in the business, i.e., the "customer," didn't really help flatten the development cost curve, or even create a clean design.
So I don't think there is a real substitute for domain expertise in the sense I was just alluding to.
> But I don't see how that would reduce the cost of change, just perhaps the fear of it. I guess there could be some reduction in cost by the tests pointing out all the broken places, whereas if the tests weren't there I'd have to spend a lot more time chasing them down? It doesn't seem like that would make an exponential difference in cost.
>
> I don't think the best design for today's requirements will help reduce the cost of change when the requirements change drastically tomorrow.

In one of my projects, after around one year of development we decided to rewrite the middle-tier component completely (for performance and some other critical change requests). Because we had a comprehensive suite of tests (around 1,500 functional tests), we could move ahead confidently and rewrote the component from scratch in one iteration (one month). Some of the original designers/developers were not on the team then, but still, having tests around the component helped us move ahead confidently. Without the tests around, I just cannot imagine touching that code. Don't we see in many projects that people are simply afraid to touch the code after a couple of years? More so if the original authors of the code are not around. Tests not only act as a safety net, but are the best documentation for the code. They force you to write "intentional code". It's easier to change the code with tests around it, even when the original author is not around. So I guess TDD affects the cost of change positively.
> Is your experience that having an XP-like close relationship with business experts can help compensate for a lack of domain experience?

Definitely. Having domain experts around you, talking to you while you are developing the software, helps a lot. It even helps the domain experts become clearer about what they really want. When you show working software to the customer constantly as you are building it, they can visualise their requirements better, or at least communicate better with the development team. There is less impedance mismatch between the two teams.
> Tests not only act as a safety net, but are the best documentation for the code.
But aren't you using an agile method which discourages traditional documentation? When you have no documentation, the only things left to act as documentation are tests and code. That hardly proves the tests are the "best" documentation in the more general case.
> They force you to write "intentional code". It's easier to change the code with tests around it, even when the original author is not around. So I guess TDD affects the cost of change positively.
You're making a lot of unsupported claims here. Why do you believe that TDD is better than writing the test after the code? If a unit passes its tests, what difference does it make whether the test was written first? If you were presented with two sets of code, one created using TDD and the other created traditionally with a unit test, would you be able to tell which is which? If so, what characteristics of the code would tip you off?
> In one of my projects, after around one year of development we decided to rewrite the middle-tier component completely (for performance and some other critical change requests). Because we had a comprehensive suite of tests (around 1,500 functional tests), we could move ahead confidently and rewrote the component from scratch in one iteration (one month). Some of the original designers/developers were not on the team then, but still, having tests around the component helped us move ahead confidently. Without the tests around, I just cannot imagine touching that code. Don't we see in many projects that people are simply afraid to touch the code after a couple of years? More so if the original authors of the code are not around. Tests not only act as a safety net, but are the best documentation for the code. They force you to write "intentional code". It's easier to change the code with tests around it, even when the original author is not around. So I guess TDD affects the cost of change positively.

Why do you say tests are the best documentation for the code? Do you mean that when I need help understanding the code, because the code itself is confusing, I can go read the test code for insight? Can I get a nice overview of the software from looking at the details of the tests?
The one thing I think we can say about the tests is that they are kept up to date better than written documentation, but I wouldn't call them the best documentation. They only really tell me when I broke something. At that point I could look at the broken test and understand better what the breakage is and what the designer's original intent was. That's a kind of documentation, but I can't get an overview of the system. It is a pain to try and discern the semantics of types and methods by looking at all the tests that call into those types and methods. I think it is much better to read an overview doc to get the high-level view, and JavaDoc to get at the semantics of types and methods.
I'm curious also what you mean by intentional code. Could you elaborate?
Lastly, I still don't quite see the connection between having the tests and the cost of change. I see the connection between having the tests and the courage to make changes, but not the cost of change. How does courage to change translate into lower cost of change?
> That hardly proves the tests are the "best" documentation in the more general case.
I am just saying that in my experience, having tests around while going through the code is much more useful than reading any other form of documentation, in particular, when you have to make changes to the code.
> You're making a lot of unsupported claims here. Why do you believe that TDD is better than writing the test after the code?
Writing unit tests after writing the code is also useful. But the TDD approach can be considered a better habit. For an average developer, writing tests and then code helps in practice. As Bjarne Stroustrup says, the first principle of better design is "You should be very clear about what you are trying to build." TDD naturally helps you understand the problem you are trying to solve.
But at the same time, the truth is that "there are no cookbook methods that can replace intelligence, good taste, and experience". Trying to find better habits for effective software development among methodologies surrounded by so much marketing hype is a difficult task. You can find out what things are good or bad for better software development only through experience. Some practices just help you out a little more.
> Writing unit tests after writing the code is also useful. But the TDD approach can be considered a better habit. For an average developer, writing tests and then code helps in practice. As Bjarne Stroustrup says, the first principle of better design is "You should be very clear about what you are trying to build." TDD naturally helps you understand the problem you are trying to solve.
I don't see how the TDD approach provides any unique help in understanding what you are trying to build. In fact, the iterative nature of TDD suggests that the programmer doesn't have a clear idea of what he is trying to build. He just keeps fiddling with the code until it passes all the tests.
Although this discussion has pretty much turned into an XP / non-XP battle, I would like to add some emphasis on modeling, which is important for any approach you take to developing software.
Code is not effective for communication with the customer. You need a model of the concepts, entities, and relationships in the domain for which you are developing the application. You need to model requirements, architectural decisions, and test cases. Agile people prefer doing this on whiteboards or paper. However, that does not let you create a repository of artifacts that are reusable throughout the software lifecycle and that integrate with each other. MagicDraw is a UML tool that allows you to do this. You can start at the highest abstraction level and later transform these concepts into lower levels and finally into code, adding technology and design knowledge and increasing the level of detail. You can also use these artifacts for different types of documentation. Sure, a balance should be kept - you cannot design the whole application without writing any code (unless you are using MDA tools, which are still evolving). You need to iterate, and there is always some feedback going back and forth between different abstraction layers.
And yes, some XP practices like TDD are really useful.
An answer to the question of what difference it makes to write the unit test before the code, as opposed to vice versa, is the following:
1) You think about the classes from the usage point of view first, as opposed to thinking about implementation first. This helps programmers create code with a better design (see the sketch after this list).
2) You implement only those features that are necessary, i.e. those that make the test pass.
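To illustrate both points, here is a minimal test-first sketch (the Account class is hypothetical, invented only for this example). The test is written before the class exists, so the API is shaped by how callers want to use it, and the implementation then does only what the test demands:

```java
import junit.framework.TestCase;

// Written BEFORE Account exists: the test decides the shape of the API
// (constructor takes an opening balance, withdraw returns the new balance).
public class AccountTest extends TestCase {
    public void testWithdrawReducesBalance() {
        Account account = new Account(100);      // usage point of view first
        assertEquals(70, account.withdraw(30));  // implementation comes later
    }
}

// The simplest implementation that makes the test pass: no overdraft
// handling, no interest, nothing the test does not demand.
class Account {
    private int balance;
    Account(int openingBalance) { balance = openingBalance; }
    int withdraw(int amount) { balance -= amount; return balance; }
}
```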
However, these useful practices are not an excuse for not keeping documentation or models, which in many cases are the core of it.
If you look at modeling, you will find that many modelers work in the same TDD-like manner: we create sequence diagrams to model scenarios (compare to unit tests), and while modeling these scenarios we add operations to classes (compare to implementing the methods necessary for the unit test to run). It's just the same practice, only at a different level of abstraction.
And we need different levels of abstraction, just as projects need a vision statement as well as user requirements and prototypes.
These could be grammarware or UML diagrams, but remember that a picture is worth a thousand words. However, you cannot draw a picture for just any thousand words. Thus, we advocate a balanced approach, which uses both models and source code and is neither code-only XP nor big up-front design.
The bottom line:
1) A source-code-only approach is not effective for any larger project;
2) Models are an effective means to communicate requirements, design, and even code or test structure;
3) Models allow you to work at different levels of abstraction and different levels of detail. UML tools allow you to store models and reuse modeling artifacts;
4) MagicDraw's code engineering and model visualization tools, together with the Eclipse integration, are useful to developers for visualizing, documenting, analysing, and refactoring at the source code level, and can be integrated with artifacts from different abstraction levels.
> An answer to the question of what difference it makes to write the unit test before the code, as opposed to vice versa, is the following:
>
> 1) You think about the classes from the usage point of view first, as opposed to thinking about implementation first. This helps programmers create code with a better design.
>
> 2) You implement only those features that are necessary, i.e. those that make the test pass.
1) Using traditional methods you think about requirements first, not implementation. TDD offers no unique advantage in thinking about design.
2) TDD doesn't guarantee that you only implement those features that make the test pass. On any iteration of the process the programmer could add more functionality than passing the test would require.
A unit test (no matter when it is written) can't distinguish between satisfying the minimum set of requirements vs. a superset of those requirements.
In any case, if there is a general problem with extra functionality it's usually not at the method implementation level. If a programmer wants to add functionality that the customer didn't ask for, he'll create new methods or classes and then use traditional methods or TDD to test it like any other part of the system.
This is really a management problem. The only way that management can guarantee that no extra functionality is included is to inspect the code or run the application. You can't rely on a programmer-based process because the programmer is the problem in this scenario.
> Why do you say tests are the best documentation for the code? Do you mean that when I need help understanding the code, because the code itself is confusing, I can go read the test code for insight? Can I get a nice overview of the software from looking at the details of the tests?

An overview doc and Javadoc are very useful for getting a higher-level overview, and we also have to have UML diagrams, etc. But when you have to make a change to existing code, tests are better documentation. You can see the dynamics, the code in action, by running the tests. You can write your own tests in minutes and see how the code behaves.
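Here is a minimal sketch of what I mean by writing a quick test to see how code behaves (it uses plain JUnit against a standard library method; the particular strings are arbitrary). A quick "learning test" like this answers a behavioral question in minutes and then stays around as executable documentation of the answer:

```java
import junit.framework.TestCase;

// A quick "learning test": written to find out how existing code actually
// behaves, then kept as executable documentation of that behavior.
public class StringSplitLearningTest extends TestCase {
    public void testTrailingEmptyStringsAreDropped() {
        // split() with the default limit drops trailing empty strings...
        assertEquals(2, "a,b,,".split(",").length);
    }

    public void testEmbeddedEmptyStringsAreKept() {
        // ...but keeps empty strings that appear in the middle.
        assertEquals(3, "a,,b".split(",").length);
    }
}
```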
> I'm curious also what you mean by intentional code. Could you elaborate?

Tests help you better understand the exact responsibilities of a module. As your understanding grows, you can factor out responsibilities better and name things better. This is true even if you write the tests after the code. When you start writing tests, you have to refactor the module, factor other modules out of it, and rename things, in effect making the code more intentional. The code reflects what's in your mind. When you are clear about what the code should do, you write clearer code.
> Lastly, I still don't quite see the connection between having the tests and the cost of change. I see the connection between having the tests and the courage to make changes, but not the cost of change. How does courage to change translate into lower cost of change?
Doesn't the courage to change directly affect the time you spend changing the code? When you have the courage to change a module, you can even make drastic changes. Many times these drastic changes make the code cleaner and easier to deal with. You have to spend less time making the change. You can align your design in the direction of change. If you don't have the courage to change the code, it is almost certain that you will make a local fix and not even bother to refactor the overall design. Over time, such code, with a lot of local fixes, becomes impossible to maintain.

As a side effect of this, I have seen team members getting bored and frustrated with the code, asking to be released from the project. I have seen many developers asking, "Am I just going to make small fixes here and there? I want a better opportunity." Or, "This code is too ugly; I feel uneasy when I touch it. I will be happy if this project goes on hold." It is definitely a bad thing to have such a feeling in the development team. But the fact is, we have many, many software projects facing this situation. (I work in India, and many offshore projects especially suffer from the scenario described above. I don't have exact figures, but I have seen many projects going on "hold" because of missed timelines and confusion about exact requirements.)
With the courage to change the code, you have the freedom to improve the design daily, making your job more interesting. You don't get frustrated with changing requirements. You look at a requirement change as an opportunity to improve your design. This change in mindset has a positive impact on development teams. All of this in effect definitely makes the cost of change go down.

There is another social impact of having tests. Developers are stress-free. At the end of the day you know exactly what state your code is in, and whether you have broken anything. Especially in offshore development projects, when you commit your code at the end of the day, many times you are not very sure whether your changes have broken anything. Have you taken care of all the things communicated to you by your onsite coordinator? When you are not sure of this, you tend to spend extra time in the office, or even when you leave the office, you have an uneasy feeling that you might get a call in the middle of the night asking you about a certain change. This slowly affects your personal/social life. This issue is very important to the long-term benefit of any project/organization. When you communicate in the form of tests, there is less chance of confusion. You are more confident about your changes and live a stress-free life. The social impact of this practice is very important and definitely affects the cost of change in the long run.