Software development involves constantly making return on investment decisions. Investing extra time to achieve higher quality software can yield dividends far into the future, or can cause you to miss a market window in the near term. How do you make the tradeoff between software quality and development speed?
The first few years of trying to build a community site at Artima.com, I wrote some of the worst code of my career. It is nowhere near the worst code I've seen, but it is among the worst I've written. Why? One reason is that among all the software jobs I've had, I've felt the most intense pressure to get software done quickly at Artima. I knew I was hacking, but I felt that the right tradeoff was to get the functionality out the door as soon as possible, even at the expense of code quality. Part of the pressure came from the fact that at previous jobs, I was pretty much just a programmer, though sometimes also a project manager. But at Artima, I've had many more hats to wear, so my programming time has been limited. Nevertheless, much of the pressure came from perceived competitive threats. I perceived that I needed to hit certain market windows before they vanished, and that made me feel a lot of pressure to move fast.
Looking back, I think hacking was the right decision. Most of the quality of service problems I've troubled my users with have little to do with the hacks I did. The hacked software worked well enough to meet the business need. Getting features out the door fast allowed me to build up the audience, and getting real feedback from them has taught me a great deal about the business domain of online publishing and communities. Now I have several years of real experience to draw from, and I'm to a great extent starting over with the software. I now have a much clearer picture of what I want to build and how I want to build it.
A few months ago, Frank Sommers and I started work on what we're calling the "next generation" of Artima. This time around, I'm trying hard to ensure our software is well-designed and crafted. The reason is that at this juncture, I want to build an architecture that will enable us to scale our business. I perceive a long-term return on investment for taking great care in how we shape that architecture.
I find that I am usually quite capable of producing good enough software by hacking. (By hacking, I mean cutting quality corners, such as hastily copying and pasting code.) The trouble with hacking is not that the code you hack together isn't good enough to meet the immediate need. Rather, the trouble is that long term the technique doesn't scale. As you hack together more and more pieces, eventually the whole thing wants to fall over. You have to make extra efforts to prop it up so it doesn't fall over, and that slows you down. By contrast, taking the time to build a solid architecture up front slows you down initially, but can help you move much faster later.
I'm currently making the bet that, so long as we don't completely miss the market windows, putting in place a solid architecture will give us a competitive edge that could have a big payback in the future. But I'm nervous, because I know of competitors who are working in the same directions. And they may be hacking. For the time being, though, I'm trying to keep a steady hand on the wheel as we take the longer, higher quality route.
How do you decide when to cut corners, and when not? Looking back, how often did you regret not moving faster with more of a hack? How often did you regret that you didn't move slower and do higher-quality work? And most importantly, how have you successfully achieved quality software at a fast pace in the past? What I'd really like to do is generate high quality software as fast as any hack. Have you been able to do that? How did you accomplish it?
When I took my first job at a "real" company (i.e. not a startup) I decided I was going to write a great, well-designed CMS. It was going to be elegant and flexible; it was going to be great. Suffice it to say that the project never got anywhere and I moved on to go work at a startup (and to this day I have not worked for another "real" company).
Over the past few years I've gone back and forth between developing for now and designing for later. Between the two I've had more success developing for now and less success designing for later. The primary reason is that I've tried to design without really understanding what needed to be designed. In the end that almost always leads to failure.
For the moment I try to build first using a mix of scripting languages and Java. I'll be the first to admit that I don't write nearly enough tests. I want to write more tests but unfortunately I feel that I am more productive by fixing problems rather than trying to stop them from happening in the first place. Thus for the moment I don't write a lot of test cases. I would like to though, and this is where my choice of tools and languages comes in.
I really like Python. I like it because it's terse, but another thing I really like about it is that it is very easy and convenient to include testing code in the same file as the actual class which is being tested. This is in part because of how Python source code is structured (i.e. multiple class declarations and function declarations in a single source file) versus Java (one public class per source file) and because the terseness means that unit testing can be accomplished in fewer lines than with Java. So in the long run I see Python as being my means for writing quality software fast. Am I there today? Unfortunately, no. I'm far too comfortable with Java to just switch to Python. However as the Python standard libraries grow, the documentation gets better, and the usage grows, I believe that making the switch will be easier to do.
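As a sketch of what in-file testing can look like, here is a small Python module that keeps a class and its unit tests together in one file (the `Stack` class and test names are illustrative, not from the post):

```python
import unittest

class Stack:
    """A minimal stack; its tests live in the same module."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def __len__(self):
        return len(self._items)

class StackTest(unittest.TestCase):
    """Tests defined right next to the class they exercise."""
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)
        self.assertEqual(len(s), 1)

    def test_pop_on_empty_stack_raises(self):
        self.assertRaises(IndexError, Stack().pop)

if __name__ == "__main__":
    # Running the module directly runs its own tests.
    unittest.main(argv=["first-arg-is-ignored"], exit=False)
```

Importing the module gives you `Stack`; executing it directly runs `StackTest`, which keeps the tests from drifting away from the code they cover.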
When I was coding professionally, I invariably invested in long-term designing, bullet-proofing, and "building for the future". Invariably, it was a mistake to do so.
I can't say I've learned my lesson. I *still* think that way. Can't seem to help myself. But I can tell you that it is almost without exception very wrong to do so -- most of the companies I built that software for have long since gone out of business. Generally, before doing so, they changed their business model so the software was no longer appropriate, and they wound up using something else, instead--generally something they purchased from a vendor who specialized in that area, rather than spending the time and energy to make extensions.
That said, the one area where I *don't* think attention to detail is a waste of time is in the user interface. If you don't get that right, none of the rest of it is worth a fiddle. But all of the time I invested in carefully crafting an "architecture for the future" was invariably time and effort that went down the drain. (When I tried to start a company, the extra time was the death knell. We might have had a chance, had we hit the market a year earlier.)
So by all means, learn from my mistake. Besides, these days I'm totally delighted by the combination of unit testing and refactoring. Unit testing keeps my bugs shallow, so they're easy to fix. Refactoring lets me build elegance into the architecture over time, as needed to support new features.
Gone are the days when my beautiful design was crashed by a pile of bugs, until I lost all confidence in the architecture. That was always followed by a long process of testing confidence into the product. It worked, after a fashion. As the bug rate dropped, confidence gradually grew. But the patches and fixes that kept getting layered on took the original clean design and turned it into an ugly hodge-podge of hacks, so that any sense of elegance or beauty was lost.
For me, unit testing put the fun back into programming, and refactoring (plus patterns) have created the capacity for an organic convergence on elegance. The old waterfall method is dead. Long may it lie in its watery grave.
> How do you decide when to cut corners, and when not? Looking back, how often did you regret not moving faster with more of a hack? How often did you regret that you didn't move slower and do higher-quality work? And most importantly, how have you successfully achieved quality software at a fast pace in the past? What I'd really like to do is generate high quality software as fast as any hack. Have you been able to do that? How did you accomplish it?
I like Alistair Cockburn's statement in his book "Surviving Object-Oriented Projects":
"- Increments let you fix or improve the development process.
- Iterations let you fix or improve the quality of the system."
Time-to-market, quality, and features are conflicting goals. Even with the best hack, delivering better quality takes more time. You have to choose the right balance between those goals, and that balance varies from project to project (what is good enough, what is soon enough?). But, in my experience, the most common problem I have seen is that increments were not used to re-evaluate that balance and how the balance is achieved, and to adjust the process accordingly.
From what you are describing, Bill, you are using iterations to improve the quality of your project. But have you used increments to re-evaluate your process and to re-evaluate what corners to cut and when to cut them, based on the previous increments? To be honest, I have yet to work on a project that is run this way, so I am not talking from personal experience, but I do believe that using increments this way is the right solution for achieving the correct balance between quality and time-to-market.
BTW though, I discovered Artima only recently and so far I am impressed. Maybe you are just too hard on yourself.
Quite a few years ago I used to do all that heavy design, bulletproofing, etc., too, and I agree that it was a waste of time.
I've since switched to XP and, particularly due to the test-first development philosophy, have seen the following things happen.
Developing entirely new code is slower by a factor of 2-5, depending on how much new testing infrastructure you need to write for the feature. If it's something where the general framework already exists (e.g., adding an entirely new web page to a site that already has tested web pages), you'll be in the 1.5-2.5 range. If it's something brand new, such as adding a database to a website that has never connected with one, you'll be at the higher end of that range.
Changing existing code is 3-20 times faster, depending on how safe you want to be if you didn't have the tests, and how sweeping the changes are. I do far more extensive rewriting of code (heck, rototilling) than I used to because I know the tests will catch the breakage. I do experimental rewrites that I could never have done before because I had no way of knowing if I'd broken the system or not.
That aside, I find that the majority of my feature requests tend to be changes to existing code rather than implementations of entirely new features.
As far as quality goes, we all know that in areas that change quickly we need high-quality, well-designed code in order to be able to make our changes. We also all know that in an unchanging piece of code, the quality doesn't matter; you're never touching it so it's not going to break.
I used to guess at which was which, wasting time on areas that turned out not to change, and getting nailed maintaining code that everybody swore we'd throw out after a week. Now, due to the ability to rewrite so very quickly and safely, I do "just in time" code quality. I write a basic version of something that does the minimum job, and if I am later asked to extend it, I rewrite it "properly" before extending it. (Well, "properly" in that it's now just barely good enough to handle the extension I want to add.)
So I'd say, overall, you can't write quality code quickly, if that's what this discussion is about. But you don't need quality code for quality software; you need quality tests. And if you've got the test coverage, rototilling code from "not quality" to "quality" is a far smaller exercise than it would otherwise be, and can also be done incrementally, as needed, rather than in a big, time-consuming chunk.
I've written a lot of little apps, and architected and wrote a large part of a medium-sized system (~150KLOC). I think the smaller apps came together faster, and provided a lot more value overall. The big system grew over time, and over time I got more and more careful about how I did things. The first piece was done *very* quickly, and defined a lot of what came to be later, both good and bad.
After 7 years, the system was going to be upgraded in a major way (Win16 to Win32). I ended up spending about 18 months carefully designing an architecture and building a framework that was to last another 10 years (at least, I hoped). We got about 30% of the system built, the company merged, and the project was cancelled. I learned a lot, but boy was that sad and frustrating.
My feeling is that you need to know your tools. If you don't know your tools, you can easily paint yourself into a corner. I specialize in the tools I use; I don't try to be pretty good with a lot of things, I try to be *really* good with a few.
Keep things as low-tech as you can; the less variety of "technologies" the better.
To answer your question, yes you can. How do you know when you're working quickly and effectively versus just plain hacking? Experience. That's all there is to it. You weigh the risks and you follow your gut. I've made mistakes, and had success, in both directions. I haven't seen an editor yet that popped up a tooltip saying "Hack!" or "Gold plating!" when I write code, so I have to trust the voice of experience.
> Over the past few years I've gone back and forth between developing for now and designing for later. Between the two I've had more success developing for now and less success designing for later. The primary reason is that I've tried to design without really understanding what needed to be designed. In the end that almost always leads to failure.

When I interviewed Luke Hohmann, he impressed upon me the importance of domain knowledge. In the early days of working on Artima, I really didn't have sufficient knowledge of the domain to know how to architect the software well. My experience wasn't just lacking in online publications/communities, but also in enterprise web apps in general. Now that I have gained more of that knowledge through experience, I feel far more confident in my ability to create a good design in this domain. You need more than good OO skills and good design taste to create a good design. You really need to understand the context in which you are operating.
Here's Luke Hohmann on the importance of domain knowledge:
> From what you are describing, Bill, you are using iterations to improve the quality of your project. But have you used increments to re-evaluate your process and to re-evaluate what corners to cut and when to cut them, based on the previous increments? To be honest, I have yet to work on a project that is run this way, so I am not talking from personal experience, but I do believe that using increments this way is the right solution for achieving the correct balance between quality and time-to-market.

Good question. I wasn't sure what Alistair meant by increments, so I went hunting and found the text here:
> BTW though, I discovered Artima only recently and so far I > am impressed. Maybe you are just too hard on yourself.
Thanks. The main places that are "bad" are JSPs in which a lot of business logic is mixed with presentation, and APIs that are not at all tested (simply because I was in a hurry). But I think it does illustrate that you can have a decent user experience with mediocre code quality. The real problems I have faced, which continue to cause users to suffer, turned out to be scalability problems that had little to do with the previously described warts.
I think my experience also illustrates how important an architecture can be to quality, how much the architecture leads you in certain directions. I didn't write the first 20 or so JSPs at Artima, I licensed them from Jive Software. Jive is good software and has served Artima well, but version 2.1 came with very ugly JSPs crammed full of Java and HTML. When it came time for me to add weblogs to Artima, I felt a lot of time pressure to get them out the door. At that point, I had enough experience to know I wanted some kind of MVC web architecture, but which one? To actually fix this problem, I would need to study the different MVC frameworks out there, such as Struts and WebWork. (Jive eventually replaced their ugly JSPs using WebWork.) Or maybe I'd decide to roll my own MVC framework. But such a decision would take time.
Then once I made a choice of an MVC framework, I'd have to figure out how to integrate it with the JSPs. Would I need to rewrite the JSPs? Possibly. But if not, then I'd at least have to figure out how to integrate the old with the new. And finally, I'd have to implement weblogs using the new MVC framework.
I didn't feel I had time to do all that. I needed to get weblogs out the door, so I just made a few more ugly JSPs and got weblogs out the door in three days. The next time I needed to add a feature, I went through the same decision making process. Next time, same thing, over and over until I had a hundred ugly JSPs. Each time I felt a hack was the right choice instead of fixing the problem, and in hindsight I still think it was the right choice. But you can see how the "bad" architecture propagated itself. What I'm trying to do now is pay close attention to quality while I'm building a new architecture, so that the new architecture will propagate good design.
See, I would choose whatever framework (or no framework at all) seemed to work for the current need and didn't have any problems that obviously made it completely unsuitable for future use. I'd also write a set of test cases that covered the functionality I needed. (This would probably consist of extensive unit tests for the model, which is easy to test anyway, some basic functional tests to make sure that the pages are actually being displayed more or less correctly, and unit tests for view and controller stuff if it seemed necessary.)
Then, when the next thing comes along, if it fits the current system, I just add it. If it doesn't in some way, I've got two options. If it seems I can tweak the current system to make the new feature easy to add, I do so, and do a quick pass through afterwards to clean up any obvious messes left after that. If it seems too different, I might just write it afresh, get it working, and then afterwards hack away on the pair of features until they're both using a common framework. Sometimes you've discovered a new framework so much better that you just re-implement the old feature in the new framework. This usually isn't so hard, because you've still got all of your tests. Or sometimes you just can't tell which way is better, and you leave both separate systems in place until you get another feature that will guide you towards a better way of doing things.
First of all: I find this discussion really interesting!
A lot of the posts mirror my own experience and/or confirm my gut feeling.
What it boils down to for me is this:
- Hacking can produce good enough code for now, very quickly
- Hacked software with bad quality slows you down sooner or later, when you want to extend it
- Creating good quality software always takes more time
- Better quality pays off in the (not so) long run
So what should we do? I found some very good advice in the posts above:
1) Assess what quality level vs. speed is appropriate for your current situation
2) Quality on demand: you don't necessarily need perfect quality everywhere; you can improve via refactoring
3) Don't design for the future!
I think the agile movement is so popular because it matches very well with these experiences. They lean towards good quality software that does only what is needed now. They don't like hacks and discourage them, which I think is correct most of the time. Be careful when you hack something; you are taking out a loan that you have to pay interest on later...
I agree with a lot of things that have been said here and I don't want to repeat them. Just so much: I also found myself more often than not on the side of those regretting having put too much time into upfront design, quality assurance, and refactoring parts that already worked just because I had an idea how to do it more elegantly and in a more general way.
But there's one thing that hasn't come up in this debate so far, and that is the composition of the team. It's one thing to tell anyone of the people on this site to do a quick hack to hit some market window but it's a completely different thing to tell the same to the average programmer out there.
If you ask someone who has put a lot of thought into design issues during his (or sometimes her) career to do a quick hack, you can be quite sure that the code is at least readable and has some robustness to it. It might not be elegant, but it is also unlikely to be hazardous. On the other hand, I've seen unimaginable atrocities in many people's code. So I think you have to be careful not to apply the same principles regardless of the composition of your team. To have three Jazz legends do an improvised session can be art. To ask the same of a bunch of ManU supporters on the train between Piccadilly station and Warrington will produce quite different results.
And the second thing is that there are different kinds of hacks. There are hacks that can be contained, and hacks that spill out into all parts of the system and make it unmanageable from the start. It might not be possible to replace all the quick solutions in one go or rewrite the whole system. So I find it important to enforce at least some form of layering or modularization from the start, otherwise it might be impossible to gradually improve the system later on.
It doesn't take much time to make it clear to everybody which parts of the system may depend on which other parts, and which must not. Or things like general exception handling and logging principles. If such a general architecture is in place, people can start to hack away inside particular modules and you can catch most of the spill-out by properly testing it.
But speaking of testing, I'm a little uneasy about the current test driven development euphoria. Yes, it's a good thing to write tests, no doubt about that, but there are certain areas where it takes a lot to convince me of the reliability of the test results. Such areas include everything to do with concurrency: threading, transactions, transaction isolation, locking, etc. In these areas, I need to understand exactly how things work. Empirical proof is not enough there. You want logical proof first.
It's very hard to write meaningful tests for these aspects, even if you know everything about the default transaction isolation level of your DBMS, your J2EE server's connection pooling facility, whether your DBMS uses row versioning or write locks, what sensitivity open cursors have, which locking hints, if any, are used in your SQL, and so forth. These are things where cutting corners is not a question of elegance or maintainability, and where testing is anything but a panacea, because testing these things properly takes a huge effort and requires very deep understanding of the issues involved.
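To illustrate why empirical evidence is weak here, consider a small Python sketch (the counter classes are hypothetical, not from the post): a counter with an unguarded read-modify-write can pass a naive multithreaded test by sheer scheduling luck, while the lock-protected version is correct by reasoning, not by luck.

```python
import threading

class UnsafeCounter:
    """Read-modify-write with no lock: lost updates are possible,
    but a single test run may never observe one."""
    def __init__(self):
        self.value = 0

    def increment(self):
        v = self.value       # another thread may increment here...
        self.value = v + 1   # ...and its update is then silently lost

class SafeCounter:
    """Correct by construction: the lock makes increment atomic."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1

def hammer(counter, n_threads=8, n_iters=10_000):
    """Increment the counter from several threads and return the total."""
    def work():
        for _ in range(n_iters):
            counter.increment()
    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value

# The locked counter always reaches n_threads * n_iters. The unsafe one
# may or may not, depending on thread scheduling -- which is exactly why
# a green concurrency test proves little on its own.
```

A test of `UnsafeCounter` that happens to come back green tells you nothing; only the reasoning about the lock in `SafeCounter` does.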
So my conclusion is, you have to know the level of expertise of the people you ask to do a quick hack. You have to have a generally sound architecture in place. And you have to know where _not_ to cut the corners.
> But you can see how the "bad" architecture propagated itself. What I'm trying to do now is pay close attention to quality while I'm building a new architecture, so that the new architecture will propagate good design.
Hi Bill, this seems a rather frustrating cycle: without sufficient domain knowledge, it's more likely than not that your architectural decisions are going to be bad, and these will propagate in the fashion you describe; but making the right decisions in the first place is going to take too much time and therefore result in a cancelled project.
Taking a cynical view on things for a moment, it would seem inevitable that your first attempts at any "non-trivial" app in the domain are doomed, in the sense that they're not going to accommodate growth organically beyond a certain level, or they're going to get cancelled.
As other posters have implied, the only way to break this vicious cycle is to learn from your mistakes, strive for simplicity and be hungry to better your knowledge of the domain and best practices (refactoring, test-first, design patterns).
You must already be using increments. You didn't develop Artima in one shot, deliver the result, and stop. Instead, you started with a minimal set of features and you have regularly added new features based on their priorities.
But here is the way I understand what Alistair describes and what I am suggesting. What you are doing right now, questioning how you balance quality and time-to-market, this should be done more often.
You are mentioning yourself that you were new to the domain when you started. That happens a lot in our profession because there are always new technologies coming up and they are creating new domains. The way you do things cannot be the same when you're new at it as when you've done the same thing many times before.
Develop in smaller increments and adjust your process for each increment because each one may be different. Don't re-evaluate your process only a few years after starting the project. And do not stop here because there is no silver bullet.
I think that Jim Highsmith's "Agile Software Development Ecosystems" is a very good presentation of the rationale behind incremental development and agility in general.