
Computing Thoughts
Testing vs. Reviews
by Bruce Eckel
July 5, 2006
Summary
Some studies have shown that reviews are a far cheaper and more efficient approach to error removal than testing. These studies don't suggest that reviews should replace testing, but that you are missing out on some big economic leverage if you don't use reviews.

And yet, in all the years that I have offered design reviews, code reviews, and walkthroughs, only one client has ever used these services. But in that case, the president of the startup company was someone who had previously been a VP of software development and had a number of successes under his belt. He knew from experience the value of saying "Let's step back and look at this again," and thus there was buy-in for the concept at the highest level.

It may simply be an issue of maturity in software development organizations. My experience is that the seminar is the easiest package for an organization to consume, because it has a clear definition and boundaries. It's really a kind of consulting, but you have a pretty good idea of what you're getting. And "doing training" is a well-accepted concept in the corporate software development world; indeed, some organizations have "training managers."

Consulting is a dicier business, especially because a stigma can easily be attached to "needing a consultant." (And I'm speaking here of consulting as giving advice about a project, rather than doing the work as a contractor or temporary employee.) In some environments, bringing in a consultant can be taken to mean "you/your team can't do the work yourselves." In the technical world, it's all about what you know, so admitting that you don't know something can be akin to exposing your throat to the alpha dog.

Ironically, I learned long ago that it was a waste of my time to smile and nod as if I understood the conversation, just to be able to pretend that I knew something. Especially if I then turned around and reiterated my misunderstanding to another smile-and-nodder, so that eventually everyone got the wrong idea. Much better to just suck it up and say "I don't know what that means." I try to be very aggressive with myself about this, telling myself not to let anything slide and to ask the question. This has the important secondary benefit of discovering whether the person explaining the topic really knows what they're talking about.

My perception is that for an organization to be able to consume consulting, it must have that same attitude, but on a group scale. Maturity is when you don't say "uh oh, here's something I don't know, and maybe other people will discover that and think I'm stupid." Instead, you have enough experience to know that you are competent and valuable, and yet to be able to say, "I know some things, but there is a ton of stuff I don't know, and if I never ask the question I'll never learn any more."

If you are evaluating a project and you discover a risk, the immature thing to do is to ignore the risk and pretend it will go away or won't happen. The mature thing to do is to manage that risk. This usually means finding out everything you can about it, and creating a plan to mitigate it if it materializes.

We are naturally drawn to the happy path. In fact, our minds tend to filter out difficult and unpleasant experiences, so we rapidly forget the monumental struggles and remember fondly how easily a program went together, how we waved our hands and everything magically coalesced. So when the next project comes up, we remember how easy the last one was, and we schedule and assess risk based on that illusion. No wonder so many projects fail.

On a number of occasions, I've made the point that your project is more likely to fail than to succeed. This is usually a jarring idea. People believe that, in theory, someone else's project has the odds stacked against it, but not this one. This one is special. And if you say otherwise, then you're a negative thinker, you're not a team player, and you aren't committing completely enough to (insert management wonder solution of the month here).

The problem with that approach is that you are assuming the happy path, which is incredibly unlikely. If you instead accept that the odds are against you -- not in a defeatist way, but to see the reality of the situation from the start -- then you can start off with the likelihood of failure staring you in the face, so you can say "what are we going to do about this?" This is, for example, why I say "if it's not tested, it's broken," because then I'm not surprised when something fails. I'm only surprised that it slipped through the tests, and that's something I can fix.

Economics is an important reality in software projects. You have limited resources (including time), and your success or failure depends on how you allocate them. Which brings us back to the question: if reviews are so useful and cost-effective, why don't software organizations do more of them?

This could easily be a lack of experience, just as it was with unit testing in the past. If you apply the "it works for me" correctness metric to your code, then unit testing seems like a waste of time. And the idea may be foreign to your current way of thinking, so you may push it away. No amount of discussion can change your mind, but a single experience with unit-tested code can. You make a change, the unit tests go off, and you realize that the bug would otherwise have been buried, or would have taken you a lot of time and pain to discover. That's the point at which you say "testing is essential," because you know that it saves you time. But if you are never exposed to it, you never get the chance to have that experience.
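
To make that moment concrete, here's a minimal sketch using Python's built-in unittest module. The parse_port function and its tests are invented for illustration; they aren't from the studies discussed here:

    import unittest

    def parse_port(value):
        """Parse a TCP port number from a string (hypothetical example)."""
        port = int(value)
        if not 0 < port < 65536:
            raise ValueError("port out of range: %d" % port)
        return port

    class ParsePortTest(unittest.TestCase):
        def test_valid_port(self):
            self.assertEqual(parse_port("8080"), 8080)

        def test_rejects_out_of_range(self):
            # If a later "cleanup" deletes the range check above, this
            # test fails immediately instead of letting the bug get buried.
            self.assertRaises(ValueError, parse_port, "70000")

    if __name__ == "__main__":
        unittest.main()

Delete the range check and rerun the tests: the second test fails on the spot. That's the "unit tests go off" experience, and it's what turns skeptics into converts.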

The same may be true of design and code reviews. If they aren't very common, people don't get a chance to see how valuable they can be. Another problem is that everyone within the group may feel they already understand the design or code well enough, so going over it again seems boring or a waste of time. This is why it's useful to have someone from outside the group lead the review: either someone else in the company, if it's big enough, or a consultant (which is why I offer the service). In either case, that person may be able to bring new information or ideas to the process, and thus make it more enticing for people to participate.

How is it done in your organization? Do you hold design and code reviews, or avoid them? If you have done reviews, was it a clear win in everyone's mind, or were people unconvinced that it was a good use of time?


The information about the review studies came from the chapter "An Experimental View of Software Error Removal" in Software Conflict 2.0 by Robert L. Glass, d.* books, 2006.

About the Blogger

Bruce Eckel (www.BruceEckel.com) provides development assistance in Python with user interfaces in Flex. He is the author of Thinking in Java (Prentice-Hall, 1998; 2nd edition, 2000; 3rd edition, 2003; 4th edition, 2005), the Hands-On Java Seminar CD-ROM (available on the Web site), Thinking in C++ (Prentice-Hall, 1995; 2nd edition, 2000; Volume 2, with Chuck Allison, 2003), and C++ Inside & Out (Osborne/McGraw-Hill, 1993), among others. He has given hundreds of presentations throughout the world, published over 150 articles in numerous magazines, was a founding member of the ANSI/ISO C++ committee, and speaks regularly at conferences.

This weblog entry is Copyright © 2006 Bruce Eckel. All rights reserved.
