
Weblogs Forum
Testing vs. Reviews

9 replies on 1 page. Most recent reply: Jul 30, 2006 9:45 AM by Mike Tocci

Bruce Eckel

Posts: 875
Nickname: beckel
Registered: Jun, 2003

Testing vs. Reviews (View in Weblogs)
Posted: Jul 5, 2006 11:42 AM
Summary
Some studies have shown that reviews are a far cheaper and more efficient approach to error removal than testing. These studies don't suggest that reviews should replace testing, but that you are missing out on some big economic leverage if you don't use reviews.

And yet, in all the years that I have offered design reviews, code reviews, and walkthroughs, only one client has ever used these services. But in that case the president of the startup company was someone who had previously been a VP in software development and had a number of successes under his belt. He knew from experience the value of saying "Let's step back and look at this again," and thus there was buy-in for the concept at the highest level.

It may simply be an issue of maturity in software development organizations. My experience is that the seminar is the easiest package for an organization to consume, because it has a clear definition and boundaries. It's really a kind of consulting, but you have a pretty good idea of what you're getting. And "doing training" is a well-accepted concept in the corporate software development world; indeed, some organizations have "training managers."

Consulting is a dicier business, especially because a stigma can easily be attached to "needing a consultant." (And I'm speaking here of consulting as giving advice about a project, rather than of someone who does the work or is a temporary employee.) In some environments, if you bring in a consultant, it could be taken to mean "you/your team can't do the work yourselves." In the technical world, it's all about what you know, so admitting that you don't know something can be akin to exposing your throat to the alpha dog.

Ironically, I learned long ago that it was a waste of my time to smile and nod as if I understood the conversation, just to be able to pretend that I knew something. Especially if I then turned around and reiterated my misunderstanding to another smile-and-nodder, so that eventually everyone got the wrong idea. Much better to just suck it up and say "I don't know what that means." I try to be very aggressive with myself about this, telling myself not to let anything slide and to ask the question. This has the important secondary benefit of discovering whether the person explaining the topic really knows what they're talking about.

My perception is that for an organization to be able to consume consulting, they must have that same attitude, but on a group scale. Maturity is when you don't say "uh oh, here's something that I don't know, and maybe other people will discover that and think that I'm stupid." Instead, you have enough experience to know that you are competent and valuable, and yet can say: "I know some things, but there is a ton of stuff I don't know, and if I never ask the question I'll never learn any more."

If you are evaluating a project and you discover a risk, the immature thing to do is ignore the risk and pretend it will go away or won't happen. The mature thing to do is to manage that risk. This usually means finding everything out about it that you can, and creating a plan to apply mitigation factors if it comes up.

We are naturally drawn to the happy path. In fact, our minds tend to filter out the difficult and unpleasant experiences, so we rapidly forget the monumental struggles and remember fondly how easily a program went together, how we waved our hands and everything magically coalesced. So the next project comes up, we remember how easy it was to do the last one, and we schedule and do risk assessment based on that illusion. No wonder so many projects fail.

On a number of occasions, I've made the point that your project is more likely to fail than to succeed. This is usually a jarring idea. People believe that, in theory, someone else's project has the odds stacked against it, but not this one. This one is special. And if you say otherwise, then you're a negative thinker, not a team player, and you aren't committing completely enough to (insert management wonder solution of the month here).

The problem with that approach is that you are assuming the happy path, which is incredibly unlikely. If you instead accept that the odds are against you -- not in a defeatist way, but to see the reality of the situation from the start -- then you can start off with the likelihood of failure staring you in the face, so you can say "what are we going to do about this?" This is, for example, why I say "if it's not tested, it's broken," because then I'm not surprised when something fails. I'm only surprised that it slipped through the tests, and that's something I can fix.

Economics are an important reality in software projects. You have limited resources (which includes time), and your success or failure depends on how you allocate those resources. Which brings us back to the question of "if reviews are so useful and cost-effective, why don't software organizations do more of them?"

This could easily be a lack of experience, just as it has been with unit testing in the past. If you apply the "it works for me" correctness metric to your code, then unit testing seems to be a waste of time. And the idea may be foreign to your current way of thinking, so you may push it away. And no amount of discussion can change your mind, but a single experience with unit-tested code can. You make a change, the unit tests go off, and you realize that the bug would have been buried, or it would have taken you a lot of time and pain to discover it. That's the point at which you say, "testing is essential," because you know that it saves you time. But if you are never exposed to it, you never get the chance to have that experience.
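
To make that moment concrete, here is a minimal sketch in Python; the function and the regression scenario are hypothetical, purely for illustration. A "cleanup" that removed the strip() call would still pass a casual "it works for me" check of the plain case, but the second test would go off immediately.

    import unittest

    def parse_price(text):
        # Convert a price string like " $3.50 " to integer cents.
        cleaned = text.strip().lstrip("$")
        dollars, _, cents = cleaned.partition(".")
        return int(dollars) * 100 + int(cents or 0)

    class ParsePriceTest(unittest.TestCase):
        def test_plain(self):
            self.assertEqual(parse_price("3.50"), 350)

        def test_symbol_and_whitespace(self):
            # The case a quick manual check tends to miss; deleting
            # the strip() call above breaks this test, not test_plain.
            self.assertEqual(parse_price(" $3.50 "), 350)

    if __name__ == "__main__":
        unittest.main()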

This may be the same with design and code reviews. If they aren't very common, then people don't get a chance to see how valuable they might be. Another problem might be that everyone within the group feels like they understand the design or code well enough and going over it again seems boring or a waste of time. This is why it's useful to get someone outside the group to lead the review, either someone else in the company if it's big enough, or a consultant (which is why I offer the service); in either case someone who might be able to offer new information or ideas to the process, and thus make it more enticing for people to participate.

How is it done in your organization? Do you have design and code reviews or avoid them? If you have done reviews, was it a clear win in everyone's mind, or were people unconvinced that it was a good use of time?


The information about the review studies came from the chapter "An Experimental View of Software Error Removal" in Software Conflict 2.0 by Robert L. Glass, d.* books, 2006.


Kevin Klinemeier

Posts: 7
Nickname: zipwow
Registered: Dec, 2002

Re: Testing vs. Reviews Posted: Jul 5, 2006 1:49 PM
I wholeheartedly agree. The problem that I always see is that code reviews, as they're often implemented, feel like a waste of time and a drag on the project schedule.

To combat that, we made these changes:

1. Print no code. All code is shown in Eclipse on an overhead projector.
2. Assign no new tasks. Eclipse is already up, so we make our changes in place. We run the unit tests and check in at the end of it.
3. If a major issue is discovered, add it to the project schedule in terms the customer can understand well enough to prioritize.

Even during crunch times, code review feels like a nice break from coding where we examine other problems, rather than an hour-long meeting where we all get more work to do.

The group that meets must:

1. Have a combination of new and experienced developers
2. Have no more than 5 people
3. Understand that they're allowed to make changes.

Since we started with this approach, code review has become one of the most valuable parts of our process.

John M Camara

Posts: 15
Nickname: camara
Registered: Dec, 2004

Re: Testing vs. Reviews Posted: Jul 5, 2006 8:37 PM
I agree that both tests and code reviews play key roles in successful software projects. Eliminating either of them just increases the chance that your project will meet the same fate as the majority of projects: being considered a failure.

I believe by now it is safe to assume that the majority of software developers and managers of software projects have come to accept the importance of testing, so I feel it's unnecessary to comment on testing. As far as code reviews are concerned, I see them as having two benefits.

The obvious one is having a second pair of eyes on the code, which can provide insights that testing alone does not offer. As developers, it comes naturally to have preconceived ideas about how to attack the problem at hand, and it is often difficult to see a more effective approach once we have convinced ourselves that we already know the right solution. Writing unit tests first, before the code, forces a different perspective on the problem and makes us more willing to reconsider our original ideas. But we are human, after all, so this approach, although often effective, is not foolproof. A second pair of eyes brings a genuinely new perspective on the problem and increases the chances of coming up with an optimal solution.

Now, as important as it may be to have a second pair of eyes on a problem, I feel it's not the most important benefit of code reviews. I feel that code reviews provide a means of mentoring each other. This form of mentoring is likely to be the only form that the majority of developers experience these days. After all, mentoring has lost most, if not all, priority in these sad times of cutting costs at all costs. We have simply forgotten how important it is to pass collective experience from generation to generation.

My preferred form of code review is pair programming, as it's the most effective form I have experienced. When that is not acceptable or practical, I find reviewing change sets the next best thing. With a tool such as Trac (http://projects.edgewall.com/trac/), these reviews become almost trivial to perform: the tool provides a timeline of the changes and a nice interface into the repository for viewing the differences applied in each change set. For this approach to be effective, you need a version control system that supports atomic commits (revisions to the repository instead of revisions to individual files), so that related changes are grouped together and reviews are easier to perform. It also becomes necessary to commit often, so that unrelated groups of changes are not mixed with one another. This form of code review does nothing to support mentoring unless the reviewer makes the effort to send comments back to the author of the change set.
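
As a rough illustration of what reviewing one change set at a time can look like from the command line (Trac shows the same information through its web interface), here is a small Python sketch. It assumes a Subversion repository; the URL and revision numbers are made up.

    import subprocess

    def show_changeset(repo_url, rev):
        # One atomic commit = one review unit: the log message plus
        # the full diff of everything that changed in that revision.
        for cmd in (["svn", "log", "-v", "-r", str(rev), repo_url],
                    ["svn", "diff", "-r", "%d:%d" % (rev - 1, rev), repo_url]):
            result = subprocess.run(cmd, capture_output=True,
                                    text=True, check=True)
            print(result.stdout)

    # Review the last few change sets, one at a time:
    for rev in range(1840, 1844):  # hypothetical revision numbers
        show_changeset("http://svn.example.com/project/trunk", rev)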

I'm not a big fan of group code reviews, as I find that few people are willing to be active participants in such reviews, so they effectively become a waste of time for the majority of the participants. That is just my opinion about reviewing lines of code; I do believe group reviews of higher-level software artifacts are effective.

Why do people have issues hiring a consultant for ideas? I believe it's, as a rule, because most people are cowards. Most people do not want to admit to not understanding something, as they see that as a sign of weakness that will be punished. A true leader, on the other hand, recognizes that he does not know everything, and is well aware that it is far more cost-effective to learn from others' experiences than to learn the same things the hard way by oneself. Although, anything you learn the hard way yourself you are far less likely to forget, and it gives you better insight.

These same cowards are afraid of failure. True leaders, on the other hand, see failures as an opportunity to learn valuable insights, and generally don't see failure as a big issue as long as one does not continue to fail in the same way. Sadly, most organizations do not recognize that some people have failures because they are pursuing stretch goals, and would rather reward those who succeed at mediocre goals.

So I guess if you want to increase your consulting, you first need to teach people to be leaders, so they have the courage to ask for help when they need it. Good luck.

John

Brian Slesinsky

Posts: 43
Nickname: skybrian
Registered: Sep, 2003

Re: Testing vs. Reviews Posted: Jul 5, 2006 11:29 PM
There are good and bad ways to do code reviews, but generally I think they come too late in the process to do a lot of good. I recently wrote about this on my blog:

http://slesinsky.org/brian/code/pair_versus_review.html

Henry Ludemann

Posts: 3
Nickname: asdf1011
Registered: May, 2006

Re: Testing vs. Reviews Posted: Jul 6, 2006 6:06 AM
We review before delivery to integration, with development consisting of short-lived branches (on the order of weeks) which are quick to review.

We don't peer review (i.e., developer and reviewer sitting down together); instead, the developer asks another developer to review the changes at their leisure (depending on the size of the change, usually within a few hours). As John said, having a tool such as Trac makes it considerably easier (we use a web frontend to ClearCase to allow quick code reviews).

Major design changes are typically group reviewed before the actual changes are made.

I've found this sort of practice quite beneficial; peer reviews tend to take a while to get organised and can break the reviewer out of whatever task they were working on.

The quicker and easier it is for the reviewer to see the changes, the faster the turnaround for reviews (especially desirable at release time, when there are lots of small fixes that need reviewing).

Keith Ray

Posts: 658
Nickname: keithray
Registered: May, 2003

Re: Testing vs. Reviews Posted: Jul 6, 2006 6:59 AM
On a project I was on (medical software), where we did formal code reviews, we reviewed both the unit tests and the code. Without the reviews, the unit tests would have been hugely ineffective because the programmers just didn't think about what could go wrong with the code. (We ended up requiring a LOT of post-review changes to improve the unit tests.)

I can imagine that a project with unit tests (written after the code, rather than test-first) but with no code review or pair programming, could have poor test coverage. Adding code reviews to such a project would likely turn up many bugs that the unit tests didn't catch. This doesn't mean that unit tests per se are not effective, it just means that many programmers need to improve their unit-testing skills.
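
To illustrate the pattern with a hypothetical example (not code from that project): the first test below is the happy-path-only, written-after-the-fact kind that a review flags, and the second class adds the "what could go wrong?" cases a reviewer typically asks for.

    import unittest

    def safe_divide(a, b):
        # Divide a by b, returning None when b is zero.
        if b == 0:
            return None
        return a / b

    class TestBeforeReview(unittest.TestCase):
        # Exercises only the path the author already had in mind.
        def test_divide(self):
            self.assertEqual(safe_divide(10, 2), 5)

    class TestAfterReview(unittest.TestCase):
        # Post-review additions: what could go wrong?
        def test_zero_divisor(self):
            self.assertIsNone(safe_divide(10, 0))

        def test_negative_values(self):
            self.assertEqual(safe_divide(-10, 4), -2.5)

    if __name__ == "__main__":
        unittest.main()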

Dino Octavian

Posts: 15
Nickname: dinoo33
Registered: Jul, 2006

Re: Testing vs. Reviews Posted: Jul 6, 2006 4:54 PM
In 16 years I have worked in only one group that had proper code reviews. And I don't mean just looking at header files and indentation, but actually reading code for correctness.

At the end of the project we ran an interesting experiment. The team lead picked a number of modules and seeded errors (typical and atypical) into the code. Then we pushed everything through the QA pipeline. The results were, more or less: code reviews caught about 60% of the errors, testing caught about 25%, and customer testing caught another 10% or so; 5% of the errors were still in the code. Since the number of seeded defects was known, we could extrapolate what was left in the code. Does anybody else have a similar metric?
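
For readers who haven't run such an experiment: the usual arithmetic behind this kind of extrapolation is the classic defect-seeding estimate (often attributed to Harlan Mills). A sketch with made-up numbers, not the actual counts from the project above:

    def estimate_remaining(seeded, seeded_found, real_found):
        # If the process finds the same fraction of real defects as of
        # seeded ones, the estimated total of real defects is
        # real_found * seeded / seeded_found; subtract what was found.
        return real_found * seeded / seeded_found - real_found

    # Say 20 defects were seeded, 19 of them were caught, and 95 real
    # defects were found along the way:
    print(estimate_remaining(seeded=20, seeded_found=19, real_found=95))
    # -> 5.0, i.e. roughly five real defects estimated to remain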

Quality-wise, the product was rock solid. Budget- and schedule-wise, though, the project was in bad shape. The customer understood where the delays were coming from and was supportive ... ummm, actually they were late on their side more than we were, so they called it even :) In the next two years very few problems were reported, and the company actually made some money on support.

Since 1993 I have worked for various companies and done all sorts of "code reviews," but in most cases they were just for enforcing some level of conformance to this or that coding standard. The review process had nothing to do with finding errors in the code. Almost useless. Most companies I have worked for rely strictly on testing for QA. I know of only one other software shop where reviews are done as they should be. And the quality of their product is very high.

My conclusion is that code reviews are effective, but expensive. Each product team has to carefully and realistically lay out its quality expectations versus liabilities and expected support costs. Then it can approach the quality problem from the right angle.

One last thing: according to DeMarco ("The Deadline"), code review/inspection may no longer be required in the final stages of development. The symptom is: "No defects found."

Lidor Wyssocky

Posts: 1
Nickname: lidor
Registered: Jul, 2006

Re: Testing vs. Reviews Posted: Jul 10, 2006 10:49 AM
Hi Bruce,

Your observations and analysis are accurate: I think most people don't practice code and design reviews because they still don't acknowledge the benefit.

However, I found that a different approach to code and design reviews might actually make the difference.

I see code and design reviews as a mentoring platform. For me a review session has two goals. The first is, of course, perfecting the product under development. The second goal, which is just as important, is to improve the skills of the developer and build his experience in a guided manner. The two goals are achieved at the same time, using the same resources.

Of course, in order to achieve both goals, the review process is somewhat different from a "normal" peer review. The reviews I refer to are conducted by a professional mentor who is not necessarily part of the development team. This also helps reduce the overhead of the review process for the development team. In fact, when done on a weekly basis, the overhead to the development team is approximately one hour per week per developer.

I invite you to read more about this process at:
http://blog.qualityaspect.com/2006/04/05/are-your-code-reviews-effective/
and
http://lucidquality.qualityaspect.com/Overview/

Lidor

Glyph Lefkowitz

Posts: 12
Nickname: glyph
Registered: Nov, 2003

Re: Testing vs. Reviews Posted: Jul 13, 2006 7:39 AM
On a distributed team, we've instituted comprehensive peer review as a substitute for pair programming. So far it's worked out remarkably well. Our process is described here:

http://divmod.org/trac/wiki/UltimateQualityDevelopmentSystem

I'm still looking for enough free time to write up our experiences thus far, but it's been shockingly good. Although I don't believe it needs to be an either-or proposition between code reviews and pair programming, reviews have some interesting advantages. For example, when pair programming, a shortcut that subverts the code's architecture can "seem like a good idea at the time." Such shortcuts look remarkably different when you are not watching the code being written, but instead reviewing it as a whole that has to be understood on its own, without the benefit of interactive communication. Forcing them to be fixed before the code ends up in the main line of development prevents a lot of expedient hacks from making the code incomprehensible.

Mike Tocci

Posts: 2
Nickname: miketocci
Registered: Aug, 2004

Re: Testing vs. Reviews Posted: Jul 30, 2006 9:45 AM
The problem with code reviews is that the reviewers are not given enough time to perform the review. I have often had to attend reviews where the reviewer barely had time to read the code before the meeting. So you end up getting a lot of obvious comments about naming conventions or unhandled exceptions that you could just as easily get from PMD or Hammurapi. To really review the code, you have to review the use case, the requirements, and the design. That takes time and effort, but management generally doesn't understand that; they tend to see the code review as a minor task that you can just squeeze into your schedule without any impact on your other work.
