The Artima Developer Community

Agitating Thoughts & Ideas
Working Effectively With Characterization Tests
by Alberto Savoia
March 9, 2007
Summary
This is the first installment in a series of short articles I plan to write on characterization testing. Characterization testing is an important concept and an essential tool for software developers working with legacy code (i.e. most of us) and it deserves broader exposure and more attention.


This is the first installment in a series of short articles I plan to write on characterization testing. Characterization testing is an important concept and an essential tool for software developers stuck with, ahem, working with legacy code. Michael Feathers introduced the term and did a great job of explaining the hows and whys of characterization tests in his book “Working Effectively With Legacy Code”. If you are a software developer or manager working with legacy code, you really ought to buy the book and read it from beginning to end. But while you wait for your Amazon.com order to arrive, I hope you enjoy my little homage, introduction, and personal twist on working with characterization tests.

Part 1 – An Introduction To Characterization Tests

We typically think of a software test for a given piece of code as some sort of executable specification for that code; in other words, the test embodies the intended behavior of that code. When one of these tests fails, it usually means that the code does not do what it’s supposed to do according to some explicit, or implied, specification. Let’s call this type of test a specification test.
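To make the idea concrete, here is a minimal sketch of a specification test. Everything in it is illustrative and not from the article: the applyDiscount method and its 10%-off contract are hypothetical, and it is written as plain Java with a hand-rolled assertion (rather than JUnit) so it is self-contained:

```java
// A specification test encodes what the code is *supposed* to do.
// Hypothetical example: applyDiscount and its documented 10% contract
// are assumptions for illustration, not from the article.
public class SpecificationTestSketch {

    // Documented contract (our assumption): reduce the price by 10%.
    static double applyDiscount(double price) {
        return price * 0.90;
    }

    public static void main(String[] args) {
        // The expected value comes from the specification, not from
        // running the code and observing what it happens to return.
        double discounted = applyDiscount(100.0);
        if (Math.abs(discounted - 90.0) > 1e-9) {
            throw new AssertionError("expected:<90.0> but was:<" + discounted + ">");
        }
        System.out.println("specification test passed");
    }
}
```

If the implementation drifts away from the documented contract, a test like this fails; that is exactly the "executable specification" role described above.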

A thorough set of specification tests is a great thing to have if you have inherited and need to modify or augment an existing body of code. But what if that body of code does not come with an adequate set of such tests and the specifications are either non-existent, limited, or out of date? This situation is all too common. I would not be surprised if the majority of software developers today are working on code that they did not originally write, with only vague and partial specifications of what it is supposed to do, with few or no automated tests to ensure that their changes will not cause serious regressions in functionality. In other words, they are working with ... legacy code. Without proper specifications or tests, the only thing the developer has to work with is the existing code and its actual behavior: not what the code is supposed to do, but what it actually does. And in most cases, the job is to modify existing functionality or add new functionality without breaking anything else. It’s a tall order. What’s a poor developer to do?

Michael Feathers has the best set of answers to this challenging and complicated question, and explains his approach thoroughly in his excellent book “Working Effectively With Legacy Code”. In this series of short articles, I am going to focus on one of the most important and effective techniques presented in the book – characterization tests.

Michael Feathers defines characterization tests as tests that characterize the actual behavior of a piece of code. In other words, they don’t check what the code is supposed to do, as specification tests do, but what the code actually and currently does.

Having a set of characterization tests helps developers working with legacy code because they can run those tests after modifying their code and make sure that their modification did not cause any unintended or unwanted changes in functionality somewhere else.

Enough theory. Let’s create some characterization tests. Time for a Hello World example.

Manual Creation of Characterization Tests

Michael Feathers suggests a simple algorithm for writing characterization tests:

  1. Use a piece of code in a test harness.
  2. Write an assertion that you know will fail.
  3. Run the test and let the failure tell you what the actual behavior is.
  4. Change the test so that it expects the behavior that the code actually produces.
  5. Repeat.

Let’s assume that I have inherited the maintenance for a sales management system (oh joy!), and that I have to make some changes to the way commissions are calculated. The code below implements steps 1 and 2 in the suggested algorithm (using the JUnit testing framework):

public void testCalculateCommissionDue() {
        assertEquals(-42.42, SalesUtil.calculateCommissionDue(1000.0), 0.0);
}

The piece of code I am using is the method calculateCommissionDue(), part of the Java class SalesUtil, and I expect my assertion that the commission on $1000.00 is -$42.42 to fail – unless this company has a truly original compensation plan for sales people.

I run the JUnit test (step 3) and get the following failure message:

junit.framework.AssertionFailedError: expected:<-42.42> but was:<200.0>

All right. It looks like for sales of $1000.00, the actual and current behavior says that the commission is $200.00. I don’t know if that’s right or wrong, but it looks like a reasonable value. I have to assume that, if neither the sales people nor the accounting people have complained so far, the current behavior is what’s expected. Since people are particularly touchy when it comes to money, I’d better make sure that when I start making changes I don’t unintentionally change this behavior. So I modify the test to reflect the actual behavior (step 4):

public void testCalculateCommissionDue() {
        assertEquals(200.0, SalesUtil.calculateCommissionDue(1000.0), 0.0);
}

I re-run the test and see that it passes. Cool, I have my first characterization test. It’s not much, but it’s a start.

What next? Step 5 says, repeat. Sounds a bit like the instructions on your shampoo bottle, doesn’t it? And if you are a programmer, I bet that you never repeat after you rinse. In this case, however, you do need to repeat, because one test is probably not going to cut it. But how many times do you need to repeat? When can you stop testing and start changing the code? It all depends on the circumstances, and there are no easy answers. Michael Feathers provides the following heuristics for writing characterization tests:

  1. Write tests for the area where you will make your changes. Write as many test cases as you feel you need to understand the behavior of the code.
  2. After doing this, take a look at the specific things you are going to change, and attempt to write tests for those.
  3. If you are attempting to extract or move some functionality, write tests that verify the existence and connection of those behaviors on a case-by-case basis. Verify that you are exercising the code that you are going to move and that it is connected properly. Exercise conversions.
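To make the first heuristic concrete, here is a sketch of what a small batch of characterization tests around calculateCommissionDue() might look like once each observed value has been recorded. Note the assumptions: the article never shows SalesUtil's implementation, so the stand-in below assumes a flat 20% rate (consistent with the one observed behavior, $1000.00 in sales yielding a $200.00 commission), and it uses a hand-rolled assertEquals so the sketch runs on its own without JUnit:

```java
// Sketch of heuristic 1: probe the area you plan to change with several
// characterization tests. The stand-in implementation below is an
// assumption -- a flat 20% rate -- chosen only because it matches the
// single behavior observed in the article (1000.00 -> 200.00).
public class SalesUtilCharacterizationSketch {

    // Hypothetical stand-in for the legacy SalesUtil method.
    static double calculateCommissionDue(double sales) {
        return sales * 0.20;
    }

    // Minimal double comparison with a tolerance, in place of JUnit.
    static void assertEquals(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void main(String[] args) {
        // Each expected value below was first "discovered" by asserting a
        // deliberately wrong value and reading the failure message
        // (steps 2-4 of the algorithm), then recorded here. The inputs
        // probe typical, boundary, and odd cases in the area to change:
        assertEquals(0.0,    calculateCommissionDue(0.0));     // no sales
        assertEquals(0.002,  calculateCommissionDue(0.01));    // smallest sale
        assertEquals(200.0,  calculateCommissionDue(1000.0));  // the known case
        assertEquals(-100.0, calculateCommissionDue(-500.0));  // a refund: negative commission!
        System.out.println("all characterization tests pass");
    }
}
```

The refund case is the interesting one: a characterization test records that negative commissions happen today, without taking a position on whether they should. If a later change to the commission logic alters that behavior, the test will say so.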

Wow. It sounds like a lot of work. Where to start? What am I going to do next with this example?

Well, you’ll have to wait for the next installment in the series, since I promised myself to keep each part of this series short and sweet. In the meantime, why don’t you try your hand at writing some characterization tests on your own? Download some open source code that you are not familiar with (I suggest something very simple to get started) and see what you can do.


About the Blogger

Alberto Savoia is founder and CTO at Agitar Software, and he has been a life-long agitator and innovator in the area of software development and testing tools and technology. Alberto's software products have won a number of awards including: JavaOne's Duke Award, Software Development Magazine's Productivity Award, Java Developer Journal's World Class Award, and Java World Editor's Choice Award. His current mission is to make developer unit testing a broadly adopted and standard industry practice rather than a rare exception. Before Agitar, Alberto worked at Google as the engineering executive in charge of the highly successful and profitable ads group. In October 1998, he cofounded and became CTO of Velogic/Keynote (NASD:KEYN), the pioneer and leading innovator in Internet performance and scalability testing. Prior to Velogic, Alberto had a 13-year career at Sun Microsystems where his most recent positions were Founder and General Manager of the SunTest business unit, and Director of Software Technology Research at Sun Microsystems Laboratories.

This weblog entry is Copyright © 2007 Alberto Savoia. All rights reserved.


Copyright © 1996-2019 Artima, Inc. All Rights Reserved.