The Artima Developer Community

Weblogs Forum
Working Effectively With Characterization Tests

3 replies on 1 page. Most recent reply: Mar 14, 2007 7:48 AM by Alberto Savoia

Alberto Savoia

Posts: 95
Nickname: agitator
Registered: Aug, 2004

Working Effectively With Characterization Tests (View in Weblogs)
Posted: Mar 9, 2007 2:20 PM
Summary
This is the first installment in a series of short articles I plan to write on characterization testing. Characterization testing is an important concept and an essential tool for software developers working with legacy code (i.e. most of us) and it deserves broader exposure and more attention.

This is the first installment in a series of short articles I plan to write on characterization testing. Characterization testing is an important concept and an essential tool for software developers stuck with, ahem, working with legacy code. Michael Feathers introduced the term and did a great job of explaining the hows and whys of characterization tests in his book “Working Effectively With Legacy Code”. If you are a software developer or manager working with legacy code, you really ought to buy the book and read it from beginning to end. But while you wait for your Amazon.com order to arrive, I hope you enjoy my little homage, introduction, and personal twist on working with characterization tests.

Part 1 – An Introduction To Characterization Tests

We typically think of a software test for a given piece of code as some sort of executable specification for that code; in other words, the test embodies the intended behavior of that code. When one of these tests fails, it usually means that the code does not do what it’s supposed to do according to some explicit, or implied, specification. Let’s call this type of test a specification test.

A thorough set of specification tests is a great thing to have if you have inherited and need to modify or augment an existing body of code. But what if that body of code does not come with an adequate set of such tests and the specifications are either non-existent, limited, or out of date? This situation is all too common. I would not be surprised if the majority of software developers today are working on code that they did not originally write, with only vague and partial specifications of what it is supposed to do, with few or no automated tests to ensure that their changes will not cause serious regressions in functionality. In other words, they are working with ... legacy code. Without proper specifications or tests, the only thing the developer has to work with is the existing code and its actual behavior: not what the code is supposed to do, but what it actually does. And in most cases, the job is to modify existing functionality or add new functionality without breaking anything else. It’s a tall order. What’s a poor developer to do?

Michael Feathers has the best set of answers to this challenging and complicated question, and explains his approach thoroughly in his excellent book “Working Effectively With Legacy Code”. In this series of short articles, I am going to focus on one of the most important and effective techniques presented in the book – characterization tests.

Michael Feathers defines characterization tests as tests that characterize the actual behavior of a piece of code. In other words, they don’t check what the code is supposed to do, as specification tests do, but what the code actually and currently does.

Having a set of characterization tests helps developers working with legacy code because they can run those tests after modifying their code and make sure that their modification did not cause any unintended or unwanted changes in functionality somewhere else.

Enough theory. Let’s create some characterization tests. Time for a Hello World example.

Manual Creation of Characterization Tests

Michael Feathers suggests a simple algorithm for writing characterization tests:

  1. Use a piece of code in a test harness.
  2. Write an assertion that you know will fail.
  3. Run the test and let the failure tell you what the actual behavior is.
  4. Change the test so that it expects the behavior that the code actually produces.
  5. Repeat.

Let’s assume that I have inherited the maintenance for a sales management system (oh joy!), and that I have to make some changes to the way commissions are calculated. The code below implements steps 1 and 2 in the suggested algorithm (using the JUnit testing framework):

public void testCalculateCommissionDue() {
        assertEquals(-42.42, SalesUtil.calculateCommissionDue(1000.0));
}

The piece of code I am using is the method calculateCommissionDue(), part of the Java class SalesUtil, and I expect my assertion that the commission on $1000.00 is -$42.42 to fail – unless this company has a truly original compensation plan for sales people.

I run the JUnit test (step 3) and get the following failure message:

junit.framework.AssertionFailedError: expected:<-42.42> but was:<200.0>

All right. It looks like for sales of $1000.00, the actual and current behavior says that the commission is $200.00. I don’t know if that’s right or wrong, but it looks like a reasonable value. I have to assume that, since neither the sales people nor the accounting people have complained so far, the current behavior is what’s expected. Since people are particularly touchy when it comes to money, I’d better make sure that when I start making changes I don’t unintentionally change this behavior. So I modify the test to reflect the actual behavior (step 4):

public void testCalculateCommissionDue() {
        assertEquals(200.0, SalesUtil.calculateCommissionDue(1000.0));
}

I re-run the test and see that it passes. Cool, I have my first characterization test. It’s not much, but it’s a start.
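The article never shows SalesUtil itself, and that's fine: the whole point of a characterization test is that we don't need to. Still, for illustration, here is one purely hypothetical implementation consistent with the behavior just observed (commission of $200.00 on $1000.00, i.e. a flat 20% rate); the real legacy code could arrive at the same number very differently:

```java
// Hypothetical only: one implementation consistent with the observed behavior.
public class SalesUtil {
    static final double COMMISSION_RATE = 0.20;  // assumed flat rate

    static double calculateCommissionDue(double totalSales) {
        return totalSales * COMMISSION_RATE;
    }

    public static void main(String[] args) {
        // Matches the characterization test: 1000.0 in, 200.0 out.
        System.out.println(calculateCommissionDue(1000.0));
    }
}
```

Whatever the actual implementation looks like, the characterization test pins down its observable behavior just the same.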

What next? Step 5 says, repeat. Sounds a bit like the instructions on your shampoo bottle, doesn’t it? And if you are a programmer, I bet that you never repeat after you rinse. In this case, however, you do need to repeat, because one test is probably not going to cut it. But how many times do you need to repeat? When can you stop testing and start changing the code? It all depends on the circumstances, and there are no easy answers. Michael Feathers provides the following heuristics for writing characterization tests:

  1. Write tests for the area where you will make your changes. Write as many test cases as you feel you need to understand the behavior of the code.
  2. After doing this, take a look at the specific things you are going to change, and attempt to write tests for those.
  3. If you are attempting to extract or move some functionality, write tests that verify the existence and connection of those behaviors on a case-by-case basis. Verify that you are exercising the code that you are going to move and that it is connected properly. Exercise conversions.
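As a sketch of what the repeat step might produce, here is a small suite against a hypothetical flat-20% stand-in for SalesUtil (the real implementation is never shown). In true characterization style, each expected value would be "discovered" by first asserting a deliberately wrong value and copying the actual result out of the failure message:

```java
// Hypothetical stand-in for the legacy class; the article never shows the real one.
class SalesUtil {
    static double calculateCommissionDue(double totalSales) {
        return totalSales * 0.20;  // assumed flat 20% behavior
    }
}

public class CharacterizationTests {
    public static void main(String[] args) {
        // Each expected value was discovered by running a failing assertion first.
        check(0.0, SalesUtil.calculateCommissionDue(0.0));        // no sales, no commission
        check(200.0, SalesUtil.calculateCommissionDue(1000.0));   // the original test case
        check(2000.0, SalesUtil.calculateCommissionDue(10000.0)); // a larger sale
        check(-20.0, SalesUtil.calculateCommissionDue(-100.0));   // a refund? odd, but it IS what the code does
        System.out.println("behavior pinned down for these inputs");
    }

    static void check(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }
}
```

Note the negative-sales case: a characterization test records that behavior without judging it. Whether a negative commission is a bug is a separate question, to be answered later and on purpose.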

Wow. It sounds like a lot of work. Where to start? What am I going to do next with this example?

Well, you’ll have to wait for the next installment in the series, since I promised myself to keep each part of this series short and sweet. In the meantime, why don’t you try your hand at writing some characterization tests on your own? Download some open source code that you are not familiar with (I suggest something very simple to get started) and see what you can do.


disney

Posts: 35
Nickname: juggler
Registered: Jan, 2003

Re: Working Effectively With Characterization Tests Posted: Mar 13, 2007 3:56 AM
The simple ideas are the best. It isn't safe to refactor your legacy code until you have something to confirm that its function remains unchanged. And if you can't refactor your code, it will die of software rot. Characterisation tests: great idea. Don't rely on them alone, though; make them part of your arsenal.

Michael Moerman

Posts: 1
Nickname: mmrm
Registered: Jun, 2003

Re: Working Effectively With Characterization Tests Posted: Mar 14, 2007 2:20 AM
I do not know what is coming in the next articles, but this sounds to me like a perfect candidate for black-box testing. You record the results of a series of black-box tests on the piece of code/interface and then you make these results the expected ones. Once you have changed the code you go at it again and make sure that the results coming out of the "black box" are still the same. Or is there a nuance that I did not grasp?

Alberto Savoia

Posts: 95
Nickname: agitator
Registered: Aug, 2004

Re: Working Effectively With Characterization Tests Posted: Mar 14, 2007 7:48 AM
Hi Michael, as you will see in part 2 (which I posted yesterday), I believe that when writing characterization tests you are not only allowed, but encouraged, to look at the code.

Black-box testing without any documentation or specification is too much of a hit-or-miss affair. In this particular example, without looking at the code how would you know what values to use for testing? The input parameter is a double and you have a virtually infinite number of possible values to use.

By looking at the code, you have a chance to discover the equivalence partitions (see http://en.wikipedia.org/wiki/Equivalence_partitioning if you are not familiar with the term) to help you narrow down the set of test cases required to achieve the desired coverage.
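For example (purely hypothetical, since the real SalesUtil is never shown): if reading the code revealed a tiered commission, the partitions and the boundary would fall out immediately, and a handful of test cases would characterize the method far better than any number of blindly chosen doubles:

```java
// Hypothetical tiered commission, to illustrate equivalence partitioning.
// Reading this code reveals two partitions (totalSales <= 10000.0 and
// totalSales > 10000.0) plus the boundary between them.
class SalesUtil {
    static double calculateCommissionDue(double totalSales) {
        if (totalSales <= 10000.0) {
            return totalSales * 0.20;                            // base rate
        }
        return 10000.0 * 0.20 + (totalSales - 10000.0) * 0.25;  // higher rate above the tier
    }
}

public class PartitionTests {
    public static void main(String[] args) {
        check(200.0, SalesUtil.calculateCommissionDue(1000.0));    // first partition
        check(2000.0, SalesUtil.calculateCommissionDue(10000.0));  // the boundary
        check(4500.0, SalesUtil.calculateCommissionDue(20000.0));  // second partition
        System.out.println("one test per partition, plus the boundary");
    }

    static void check(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }
}
```

A black-box tester picking doubles at random could easily miss the 10000.0 boundary entirely; a glance at the code makes it the obvious place to test.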

Please let me know if I made the point clearly enough in part 2.

Thanks,

Alberto

Copyright © 1996-2019 Artima, Inc. All Rights Reserved.