The Artima Developer Community

Artima Developer Spotlight Forum
Test Optimization in Clover

1 reply on 1 page. Most recent reply: Dec 1, 2008 2:47 PM by andrew cooke

Laura Khalil

Posts: 1
Nickname: zazenergy
Registered: Nov, 2008

Test Optimization in Clover Posted: Nov 10, 2008 10:38 AM

One problem with test-driven development and automated testing is that teams often find themselves with long-running suites of tests that become a time killer in the iterative development process. If the tests take too long to run, developers are less likely to run the full suite locally before a commit.

Instead, they commit their changes untested, perhaps relying on the Continuous Integration (CI) server to actually run the tests. If that becomes the norm, the CI server gets quickly overloaded, and developers have to wait hours to find out they broke the build.

With the release of Clover 2.4, Atlassian added a new test optimization feature that can reduce build times by selectively running only the tests relevant to a particular change. That makes it practical for developers to run the full test suite locally prior to each commit. It also means that the CI server's throughput is greatly improved. Both of those outcomes mean faster feedback to development teams.

Fast feedback is key to team productivity. In many teams, it takes far too long for the impact of a code change to become known, even to the submitting developer. The developer might wait many minutes, or even hours, before the Continuous Integration server gets around to building and testing the change. On the other hand, if each developer runs the full test suite locally, the developer's machine is tied up running tests, leaving the developer expensively idle.

At the same time, build breakages often derail a whole development team, with all the work grinding to a halt while the spotlight shines on the developer who introduced the problem. If a particular change is going to cause one or more tests to fail, the team needs to know about it as fast as possible—preferably, before the code is committed.

Two approaches to smarter testing

Much testing effort is wasted because many tests are needlessly run: they do not test the code change that prompted the developer to run the tests in the first place. Thus, the first step to improving test times is to only run the tests applicable to the changed code. In practice, that is a huge win, with test runtimes dramatically reduced.
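The selection step can be illustrated with a minimal sketch. This is not Clover's implementation; it assumes a hypothetical per-test coverage map (test name mapped to the set of source files it executes) and made-up file names:

```python
# Sketch: select only the tests that cover changed files, given a
# per-test coverage map (test name -> set of source files it executes).
# The map and file names here are hypothetical illustrations.

def select_tests(coverage_map, changed_files):
    """Return the subset of tests that touch any changed file."""
    changed = set(changed_files)
    return {test for test, files in coverage_map.items()
            if files & changed}

coverage_map = {
    "test_parser": {"parser.py", "tokens.py"},
    "test_renderer": {"renderer.py"},
    "test_cli": {"cli.py", "parser.py"},
}

# A change to parser.py picks up only the two tests that execute it;
# test_renderer is skipped entirely.
print(sorted(select_tests(coverage_map, ["parser.py"])))
# ['test_cli', 'test_parser']
```

The win grows with suite size: a change confined to one module typically touches only a small fraction of the tests.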

The second approach, in conjunction with the first or used independently, is to prioritize the tests that are run so as to flush out any test failures as quickly as possible. There are several ways to prioritize tests, based on the failure history of each test, running time, and coverage results.

Clover's new test optimization

As a code coverage tool, Clover measures per-test code coverage. That is, it measures which tests hit what code. Armed with this information, Clover can determine exactly which tests are applicable to a given source file. Clover uses this information, combined with information about which source files have been modified, to build a subset of tests applicable to a set of changed source files. That set is then passed to the test runner, along with any tests that failed in the previous build, and any tests that were added since the last build.
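The composition rule described above (tests covering changed files, plus last build's failures, plus tests added since the last build) can be sketched as follows. All names and data are hypothetical, not Clover's API:

```python
# Sketch of composing the test set: tests covering changed files,
# union tests that failed in the previous build, union tests added
# since the last build (which have no coverage data yet).
# All names and data here are hypothetical illustrations.

def compose_test_set(coverage_map, changed_files, failed_last_build, all_tests):
    changed = set(changed_files)
    selected = {t for t, files in coverage_map.items() if files & changed}
    new_tests = set(all_tests) - set(coverage_map)  # not seen in last build
    return selected | set(failed_last_build) | new_tests

coverage_map = {
    "test_parser": {"parser.py"},
    "test_renderer": {"renderer.py"},
}
all_tests = ["test_parser", "test_renderer", "test_new_feature"]

result = compose_test_set(coverage_map, ["parser.py"],
                          failed_last_build=["test_renderer"],
                          all_tests=all_tests)
print(sorted(result))
# ['test_new_feature', 'test_parser', 'test_renderer']
```

Including previous failures and brand-new tests guards against the two blind spots of pure coverage-based selection: regressions that are still being fixed, and tests for which no coverage data exists yet.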

The set of tests composed by Clover for a test run can also be ordered using a number of strategies:

  • Failfast: Clover runs the tests in order of likeliness to fail, so any failure will happen as fast as possible.
  • Random: Running tests in random order is a good way to flush out inter-test dependencies.
  • Normal: No reordering is performed. Tests are run in the order they were given to the test runner.

Note that Clover will always run tests that are either new to the build or failed on the last run.
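The three ordering strategies above can be sketched roughly as follows. The failure-rate history is a hypothetical stand-in for whatever per-test statistics a real tool records; this is an illustration of the idea, not Clover's algorithm:

```python
import random

# Sketch of the three ordering strategies. The failure_rate map
# (fraction of recent runs in which a test failed) is hypothetical.

def order_tests(tests, strategy, failure_rate=None, seed=None):
    if strategy == "failfast":
        # Most failure-prone tests first, so any failure surfaces early.
        return sorted(tests, key=lambda t: failure_rate.get(t, 0.0),
                      reverse=True)
    if strategy == "random":
        # Shuffling helps flush out hidden inter-test dependencies.
        rng = random.Random(seed)
        shuffled = list(tests)
        rng.shuffle(shuffled)
        return shuffled
    # "normal": keep the order the tests were given in.
    return list(tests)

tests = ["test_a", "test_b", "test_c"]
history = {"test_a": 0.05, "test_b": 0.40, "test_c": 0.10}

print(order_tests(tests, "failfast", failure_rate=history))
# ['test_b', 'test_c', 'test_a']
```

Failfast ordering does not change which tests run, only when a failure is reported, which is what matters for feedback latency.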

What do you think about Clover's test optimization? How do you decide which tests to run before a commit?


andrew cooke

Posts: 4
Nickname: acooke
Registered: Jun, 2007

Re: Test Optimization in Clover Posted: Dec 1, 2008 2:47 PM
nice idea (although "failfast" usually means something slightly different).

do any other code coverage tools already do this?

Copyright © 1996-2019 Artima, Inc. All Rights Reserved.