The Artima Developer Community

Agile Buzz Forum
Test Duration

Keith Ray

Posts: 658
Nickname: keithray
Registered: May, 2003

Keith Ray is a multi-platform software developer and Team Leader.
Test Duration Posted: Dec 4, 2005 11:34 AM

This post originated from an RSS feed registered with Agile Buzz by Keith Ray.
Original Post: Test Duration
Feed Title: MemoRanda
Feed URL: http://homepage.mac.com/1/homepage404ErrorPage.html
Feed Description: Keith Ray's notes to be remembered on agile software development, project management, oo programming, and other topics.

Extreme Programming is built on the assumption that completely testing the product under development can be done within a single iteration. If manual testing is the only method of testing, the test duration will get longer every iteration until it can't be done within a single iteration. This is why automated testing is so important on an XP project. In well-run XP projects, automated tests covering every feature can be executed, at worst, overnight or in one or two days, and at best, in 20 minutes or less. And that's two layers of automated tests: programmer/unit tests and customer/acceptance tests.
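
As a rough sketch (not from the original post; the Cart class, its prices, and the discount rule are invented here), those two layers might look like this with Python's built-in unittest module. The programmer/unit tests each pin down one narrow rule and run in milliseconds; the customer/acceptance test walks through a whole customer-visible scenario and is still fast enough to run on every build.

    import unittest

    class Cart:
        """Toy shopping cart invented for this example."""
        def __init__(self):
            self._prices = []

        def add(self, name, price):
            self._prices.append(price)

        def total(self):
            # Invented business rule: orders of 100 or more get 10% off.
            subtotal = sum(self._prices)
            return subtotal * 0.9 if subtotal >= 100 else subtotal

    class CartUnitTests(unittest.TestCase):
        # Programmer/unit tests: one narrow rule per test, millisecond-fast.
        def test_no_discount_below_threshold(self):
            cart = Cart()
            cart.add("book", 99)
            self.assertEqual(cart.total(), 99)

        def test_discount_applies_at_threshold(self):
            cart = Cart()
            cart.add("monitor", 100)
            self.assertEqual(cart.total(), 90)

    class CheckoutAcceptanceTest(unittest.TestCase):
        # Customer/acceptance test: a whole customer-visible scenario.
        def test_customer_buys_several_items_and_gets_discounted_total(self):
            cart = Cart()
            for name, price in [("book", 40), ("lamp", 30), ("chair", 50)]:
                cart.add(name, price)
            self.assertEqual(cart.total(), 108)  # 120 minus the 10% discount

    if __name__ == "__main__":
        unittest.main()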

The reason XP has iterations and automated testing is to increase the speed of feedback on the design. On a large project, a design that meets all the requirements is going to take a long time to develop. No person or team can come up with an "instant" design for all the requirements. In a phasist approach, a team might spend a long time coming up with a design that they think will meet all the requirements, and not have any real feedback on the goodness of that design until they start implementing and testing it. If it turns out the design is bad, the project may not have enough time to fix it before the shipping deadline.

Contrary to some popular misconceptions, XP teams do design, but they do it incrementally, creating tests for that design in each iteration. The implementation of that design isn't considered complete until it passes all the tests that the programmers (doing test-driven development) and the testers (creating customer/acceptance tests) think up. Most importantly, refactoring is design, as the design that was appropriate for the last iteration is changed to be appropriate for the current iteration.
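
A minimal sketch of that test-first rhythm, with an invented is_leap_year example (none of this is from the post): the test is written before the code and fails on the first run, the implementation is only treated as complete once the test passes, and refactoring follows with the tests re-run after each small step.

    import unittest

    # Step 1 (red): in test-driven development this test is written first,
    # before is_leap_year exists, and fails on the first run.
    class LeapYearTest(unittest.TestCase):
        def test_century_years_are_leap_only_when_divisible_by_400(self):
            self.assertFalse(is_leap_year(1900))
            self.assertTrue(is_leap_year(2000))
            self.assertTrue(is_leap_year(2004))

    # Step 2 (green): just enough implementation to make the test pass.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Step 3 (refactor): clean up the design, re-running the tests after
    # each small change to confirm nothing broke.

    if __name__ == "__main__":
        unittest.main()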

Fortunately for people doing incremental design work with refactoring, most automated tests do not have to change when the code is changed. The tests are acting as executable specifications for the requirements, so if the requirements haven't changed, then the tests should not have to change. Of course, when the tests are very close to the implementation, they do sometimes need to be changed, but the majority of tests should not change when someone needs to alter the design to handle the requirements being addressed in the current iteration.
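
One way to picture that distinction (the PhoneBook class is invented for this sketch): a test written against the public, requirement-level behavior keeps passing when the internals are refactored, while a test that reaches into the implementation has to change along with it.

    import unittest

    class PhoneBook:
        """Invented example: stores entries in a dict today, but could be
        refactored to a list of pairs or a database tomorrow."""
        def __init__(self):
            self._entries = {}

        def add(self, name, number):
            self._entries[name] = number

        def lookup(self, name):
            return self._entries.get(name)

    class PhoneBookBehaviorTest(unittest.TestCase):
        # Executable specification: talks only to the public interface,
        # so it survives refactoring of the internals untouched.
        def test_lookup_returns_the_number_that_was_added(self):
            book = PhoneBook()
            book.add("Keith", "555-1234")
            self.assertEqual(book.lookup("Keith"), "555-1234")

    class PhoneBookImplementationTest(unittest.TestCase):
        # Too close to the implementation: this test breaks if _entries is
        # renamed or replaced, even though the behavior is unchanged.
        def test_entries_are_stored_in_the_internal_dict(self):
            book = PhoneBook()
            book.add("Keith", "555-1234")
            self.assertEqual(book._entries["Keith"], "555-1234")

    if __name__ == "__main__":
        unittest.main()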

If someone is making changes to the code, and the tests fail, or the code isn't even executable or compilable any more, they are not doing refactoring. They are doing something else, like rewriting. Stop them. Teach them how to do the refactorings in Martin Fowler's book: each refactoring is a small behavior-preserving change that keeps the code and tests working. Run the programmer tests after each refactoring for assurance that no behaviors have been changed unintentionally. Many people use refactoring tools whose automated refactorings are almost always error-free, though running the tests is still recommended, just in case. Manual refactoring is more prone to error, so be careful and work slowly in order to avoid mistakes (and therefore go faster). Make sure the whole team knows the direction the refactorings are headed in, so you don't get into cycles where one developer undoes the refactorings of another developer and vice versa.
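
For illustration (an invented example), here is one Extract Method-style step at that scale: the behavior is unchanged, so the same test passes before and after, and it is run again as soon as the step is done.

    import unittest

    # Before the refactoring (invented example): one function does everything.
    def invoice_total_before(lines):
        total = 0
        for quantity, price_cents in lines:
            total += quantity * price_cents
        return total

    # After one small Extract Method step: same behavior, so the same
    # test passes before and after the change.
    def line_total(quantity, price_cents):
        return quantity * price_cents

    def invoice_total_after(lines):
        return sum(line_total(q, p) for q, p in lines)

    class InvoiceTotalTest(unittest.TestCase):
        # Re-run after every small refactoring step.
        def test_refactored_version_matches_the_original(self):
            lines = [(2, 350), (1, 99)]   # (quantity, price in cents)
            self.assertEqual(invoice_total_before(lines), 799)
            self.assertEqual(invoice_total_after(lines), 799)

    if __name__ == "__main__":
        unittest.main()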

Teams that haven't gotten their specification and testing processes firmly in place will experience what Michael Feathers calls "iteration slop" and "trailer-hitched QA":

When your team does trailer-hitched QA the developers work until the last possible moment within the iteration and when the iteration ends, they hand off to the QA team. The QA team gets the "final" build and they work with it for a while, running more automated and manual tests. Often this takes close to the length of an iteration. [...] Effectively, trailer-hitched QA doubles the amount of time that it takes for a team to really know that they finished an iteration. It can work, but it is kind of counter-productive in a process which aims to shorten the amount of time it takes to get feedback.

What happens when there are conflicting requirements? In a large project, it's often hard to detect conflicting requirements because you just can't hold them all in your head at the same time. You could spend some time comparing every requirement to every other requirement, but that's tedious and you're still likely to miss the conflicting requirements -- particularly when some requirements are (as yet) unwritten and/or "derived" requirements.

On projects without fast automated tests, conflicting requirements show up as a cycle of bug reports, often with long delays between bug reports. Fixing one bug causes another bug to show up, perhaps weeks later when testing gets to that conflicting requirement. Fixing that other bug causes the original bug to re-occur -- but it may be flagged as a new bug if no one reviews the old "fixed" bugs before filing a new bug report.

When every high-level requirement is written as an automated customer/acceptance test, and every low-level requirement is written as a programmer/unit test, something interesting happens: getting one conflicting requirement/test to pass will cause another test to fail. Getting that other one to pass will cause the original one to fail. If you run all the tests often, and they run quickly, it becomes pretty obvious that a cycle is going on.
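
A deliberately contrived sketch of such a conflict (the shipping-fee rules are invented): no implementation can satisfy both tests at once, so whichever requirement the code currently meets, the other test fails, and a fast suite makes the cycle visible within minutes instead of weeks.

    import unittest

    def shipping_fee(order_total):
        # The current implementation satisfies requirement A below,
        # and therefore cannot satisfy requirement B.
        return 0 if order_total >= 50 else 5

    class RequirementA(unittest.TestCase):
        # "Orders of 50 or more ship for free."
        def test_large_orders_ship_free(self):
            self.assertEqual(shipping_fee(80), 0)

    class RequirementB(unittest.TestCase):
        # "Every order is charged a minimum handling fee of 5."
        def test_every_order_pays_the_minimum_fee(self):
            self.assertEqual(shipping_fee(80), 5)

    if __name__ == "__main__":
        unittest.main()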

Many XP teams create a new test whenever they are going to fix a bug. The test fails because of the bug, and will pass when the bug is fixed. In practice (particularly with legacy code), this can be very difficult because to write a failing test, you first have to find the buggy code. However, having that test in place will help detect conflicting requirements if you later fix a different bug and cause the previous bug-detecting test to fail again.
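
A small sketch of such a bug-detecting test (the average_price function and its bug are invented): the test reproduces the reported failure, fails until the fix is in place, and then guards against the bug quietly returning if a later change removes the guard.

    import unittest

    def average_price(prices):
        # Bug fix (invented example): the earlier version divided by
        # len(prices) unconditionally and crashed on an empty cart.
        if not prices:
            return 0
        return sum(prices) / len(prices)

    class AveragePriceBugTest(unittest.TestCase):
        # Written when the bug was reported: it failed with a
        # ZeroDivisionError before the fix and passes afterwards.
        def test_empty_cart_has_an_average_price_of_zero(self):
            self.assertEqual(average_price([]), 0)

        def test_non_empty_cart_still_averages_normally(self):
            self.assertEqual(average_price([2, 4]), 3)

    if __name__ == "__main__":
        unittest.main()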

What do you do with legacy code? You can segregate it: do test-driven development on new code, avoid changing the old code, and move old features into the new, tested code gradually until the legacy code disappears. This assumes the legacy code is mostly bug-free. If you have to fix bugs in legacy code often, your best strategy is to put the old code under test, starting with the parts you are about to modify for bug fixes. See Working Effectively With Legacy Code by Michael Feathers for techniques on how to introduce testability into legacy code. You will also have to spend more time on manual testing, and probably need longer iterations in which to do it.
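
One of the techniques described in that book is the characterization test: before touching the legacy code, write tests that record what it actually does today, right or wrong, so the upcoming change cannot silently alter unrelated behavior. A minimal sketch, with an invented legacy pricing function:

    import unittest

    def legacy_price(customer_type, amount):
        # Stand-in for untested legacy code we are about to modify.
        if customer_type == "gold":
            return amount * 80 // 100
        if customer_type == "silver":
            return amount * 90 // 100
        return amount

    class LegacyPricingCharacterizationTest(unittest.TestCase):
        # Characterization tests assert whatever the code does *today*,
        # even the parts that may turn out to be wrong, so later changes
        # to the behavior are always deliberate and visible.
        def test_gold_customers_currently_get_twenty_percent_off(self):
            self.assertEqual(legacy_price("gold", 100), 80)

        def test_silver_customers_currently_get_ten_percent_off(self):
            self.assertEqual(legacy_price("silver", 100), 90)

        def test_unknown_customer_types_currently_pay_full_price(self):
            self.assertEqual(legacy_price("reseller", 100), 100)

    if __name__ == "__main__":
        unittest.main()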

Read: Test Duration


