Next-Generation Testing with TestNG

An Interview with Cédric Beust

by Frank Sommers
August 22, 2006

TestNG is a Java unit testing framework that aims to overcome many limitations of JUnit. In this interview with Artima, TestNG creator Cédric Beust describes what JUnit deficiencies TestNG aims to solve, and talks about some common unit testing misconceptions, including the dangers of overt focus on obtaining complete test coverage. He also explains the benefits of data-driven testing, and provides a preview of some new TestNG features.

While JUnit pioneered Java unit testing, and still remains the most popular Java testing tool, JUnit also comes with many limitations that impose restrictions on developers' testing habits. TestNG, an open-source Java unit testing tool, was created to overcome JUnit's key limitations, and to provide additional features needed to test the latest generation Java applications.

TestNG recently reached its 5.0 release. On that occasion, Artima spoke with TestNG creator Cédric Beust about unit testing and TestNG. In this interview, he shares what he considers common misconceptions about unit testing, what JUnit deficiencies led him to write TestNG, and how TestNG helps work with a large number of unit tests.

In addition to creating TestNG, Cédric Beust is a senior engineer at Google, and is an active member of the Java Community Process (JCP), having been involved in various aspects of the development of the latest Java release. This interview took place over email, and Artima edited the replies for readability.

Frank Sommers: You describe TestNG as a next-generation testing framework. What do you mean by next-generation testing, and what problems does TestNG address that are not also addressed in JUnit?

Cédric Beust: JUnit was explicitly designed for unit testing. What I mean by unit testing is testing a class in isolation of all other classes. Unit testing is an important part of testing, but there are other aspects of testing that I thought JUnit didn't make easy, such as regression, integration, or functional testing. Thus, while I found it easy to test classes with JUnit, when it came to testing full applications or entire systems, JUnit didn't offer a lot of flexibility.

For instance, setUp() and tearDown() methods are useful to configure test methods, but I often found the need to do similar initialization around test classes or around test suites. An example of class-level setup is when you need to create an expensive object once and keep it alive for the duration of an entire test class.

In general, poor configuration control in JUnit also means that there is no easy way to maintain state between invocation calls. That's because JUnit re-instantiates your objects from scratch between each call, forcing you to use statics to maintain state. JUnit also does not provide an easy way to pass parameters to each test method.
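The lifecycle difference described above can be sketched in plain Java, without either framework. This is a hypothetical illustration, not real JUnit or TestNG code: JUnit 3 creates a fresh test-class instance before every test method, so instance state is lost between calls, while TestNG reuses one instance, so state survives.

```java
// Plain-Java sketch contrasting the two test lifecycles.
// CounterTest stands in for a test class with instance state.
public class LifecycleDemo {
    static class CounterTest {
        int invocations = 0;            // instance state
        void testSomething() { invocations++; }
    }

    public static void main(String[] args) {
        // JUnit-style: a fresh instance per method, so state never accumulates
        int junitStyleMax = 0;
        for (int i = 0; i < 3; i++) {
            CounterTest t = new CounterTest();   // new object for each call
            t.testSomething();
            junitStyleMax = Math.max(junitStyleMax, t.invocations);
        }

        // TestNG-style: one shared instance for all methods, so state accumulates
        CounterTest shared = new CounterTest();
        for (int i = 0; i < 3; i++) shared.testSomething();

        System.out.println("JUnit-style max state:  " + junitStyleMax);      // 1
        System.out.println("TestNG-style state:     " + shared.invocations); // 3
    }
}
```

This is why JUnit 3 users fall back on statics to carry state between test methods, and why TestNG does not need to.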

In addition, JUnit forces you to recompile your tests when you want to run different sets of tests. As a result, most developers resort to commenting out test or suite methods. I wanted to compile my tests once, and then have a flexible runtime that lets me invoke subsets of tests—such as running database tests only, or fast tests only—without having to recompile anything. Hand in hand with that goes the inability to re-run only the tests that failed in a test run. And with JUnit you also cannot test methods that depend on each other.

In addition to that, prior to JUnit 4, JUnit was still using the [JDK] 1.4 API, providing no understanding of generics or annotations. Among other issues is JUnit's lack of support for data-driven testing, or support for running tests in parallel. These are just a few examples. TestNG sports a lot more features that I couldn't find in JUnit.

Frank Sommers: Many developers new to unit testing believe that they should aim for hundred-percent unit test coverage of their code. However, as developers gain more experience in test-driven development, they often realize that some code does not require unit tests, and that human interaction with the system, for instance, is good enough to ensure that the system works. In general, what do you see are the biggest misconceptions about unit testing and test coverage requirements?

Cédric Beust: First of all, I'm not convinced that many developers believe they should aim for hundred percent unit test coverage. And that's a good thing because such a goal is not only impossible to reach, but is also dangerous, as it makes you focus on the wrong things. While code coverage is a useful metric, it can also be deceptive. Often, your time will be better spent writing one more unit test to cover a certain piece of functionality than bumping your code coverage from ninety percent to ninety-one percent.

I recommend that developers pay attention to code coverage only after they have taken care of their tests. I find that coverage-obsessed developers lose track of the fact that their code is there first to serve their customers, and only then for the developer's own satisfaction.

As for misconceptions about unit testing, there are many. Let me start with the fact that many believe the term "unit test" is important. It's not. I regularly see books and articles defining in very precise terms what a unit test ought to, or ought not to, do. A frequently used definition, for instance, insists that a unit test not use the network, a database, a file, or any input or output.

If you are trying to test the functionality of a class that operates on a database, then by all means write a test that does that. Avoid using a mock if you can. Use the real database so that the test code is as close to the production code as possible. Purists usually retort that doing so takes too much time and violates unit testing principles. If so, I don't care.

First of all, testing local databases has become very fast, rendering premature optimization unnecessary. Use the real thing and see how fast it really goes before making optimization decisions. Second, while I am sensitive to the speed argument—if tests take too long to run, developers will be more reluctant to run them—this problem should be solved at the runtime level, not the design level. In other words: Write all the tests you can think of, and then at runtime decide which ones you need to execute.

That last issue is something I feel very strongly about. It is one of the main reasons I came up with the idea of test groups in TestNG. TestNG enforces a strict separation of your static and dynamic models. By static model, I mean the code of your tests, and by dynamic model, the decision of what tests to run. TestNG separates those concerns by letting you specify what test methods belong to what test group. You specify that in an annotation as, for instance, @Test(groups = { "database", "slow" }), or @Test(groups = { "web", "fast" }). [Editor's note: code examples in this interview were provided in an email exchange.]

You decide at runtime what groups to include in, or exclude from, a run. That's a very powerful mechanism that users absolutely love. When you start using test groups, you realize that categorizing your tests into unit tests or functional tests becomes irrelevant.
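The runtime group selection described above is typically expressed in a testng.xml suite file. The following is a minimal sketch; the suite, test, and class names are invented for illustration:

```xml
<!-- Run only tests in the "fast" group, excluding "slow" ones,
     without recompiling anything. Class name is hypothetical. -->
<suite name="Nightly">
  <test name="Fast tests only">
    <groups>
      <run>
        <include name="fast"/>
        <exclude name="slow"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.MyTests"/>
    </classes>
  </test>
</suite>
```

Swapping the include and exclude lines selects a completely different subset of the same compiled test classes, which is exactly the static/dynamic separation Beust describes.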

Frank Sommers: Even the most test-infected developer admits that writing unit tests is a lot of work: Not only do tests have to be written, but test suites have to be maintained in the presence of code refactoring, or some other code changes that break existing tests. In what way does TestNG ease the tedium of writing and maintaining test suites?

Cédric Beust: Test groups and annotations help a great deal in that area because you no longer care about Java names that much. With JUnit, your classes have to extend a certain class, your test methods need to have a specific signature: they must start with test, return void, and have no parameters. In practice, developers also felt constrained to use certain package names, such as test.database to keep a seemingly orderly view of their tests.

Concepts such as test groups and annotations in TestNG free tests from naming and packaging conventions: You can use any Java name for your packages, classes and methods, and you use annotations for the testing metadata, which is exactly what annotations are for.

As for simplifying the writing of tests, TestNG formalized a lot of concepts that JUnit doesn't support directly but that developers need on a daily basis. For example, I often want to pass parameters to my test methods. Passing parameters to a method seems like a natural concept: I pass parameters to my Java methods all the time. Why can't I do the same with JUnit? Why must test methods have no parameters? In JUnit, I'm forced to use convoluted design patterns to simulate parameter passing, such as:

public class ATest {
  private String name;

  public ATest(String name) {
    this.name = name;
  }

  public void testName() {
    // test code that uses this.name
  }
}
But that's not all: If you use this pattern, you also need to use a special way to invoke JUnit so that you, and not JUnit, instantiate that particular test class with the correct parameters. I was quite horrified when I realized how many times I was using this pattern in my JUnit code. Surely, a test framework should make my life easier by providing this support out of the box. In TestNG, you define your method with the parameters you need, and tell TestNG what other method will feed parameters to the test method:

@DataProvider(name = "name-provider")
public Object[][] provideNames() {
  return new Object[][] {
    new Object[] { "Cédric Beust" },
  };
}

@Test(dataProvider = "name-provider")
public void verifyName(String name) { ... }

Data providers open the door to data-driven testing. Data-driven testing is useful when your test code doesn't change much, but the data you need to feed to that code does. There is tremendous flexibility in that concept. Consider a slightly different data provider method:

@DataProvider(name = "name-provider")
public Object[][] provideNames() {
  return new Object[][] {
    new Object[] { "Cédric Beust" },
    new Object[] { "Frank" },
    new Object[] { "Anne" },
  };
}
Your test method is invoked three times, each time with one of the strings specified above. Nothing stops you from getting this data from an XML file, or a database, or an Excel spreadsheet. Users have been extremely creative with this feature, some even using JMS to deliver parameters to their test methods.
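The mechanics behind this can be sketched in plain Java, without the framework. This is a hypothetical stand-in, not TestNG itself: the framework calls the provider once, then invokes the test method once per row, passing that row's values as arguments.

```java
// Plain-Java sketch of how a data provider drives repeated test invocations.
// Method names here are invented stand-ins for the annotated TestNG methods.
public class DataProviderDemo {
    // Stand-in for a @DataProvider method: one Object[] per invocation
    static Object[][] nameProvider() {
        return new Object[][] {
            { "Cédric Beust" },
            { "Frank" },
            { "Anne" },
        };
    }

    // Stand-in for a @Test(dataProvider = "...") method
    static void verifyName(String name) {
        if (name == null || name.isEmpty()) {
            throw new AssertionError("empty name");
        }
    }

    public static void main(String[] args) {
        int invocations = 0;
        for (Object[] row : nameProvider()) {   // one invocation per row
            verifyName((String) row[0]);
            invocations++;
        }
        System.out.println("verifyName invoked " + invocations + " times"); // 3
    }
}
```

Because the provider is just a method returning Object[][], its body is free to build those rows from any source—a file, a database query, or a network call—which is what makes the feature so flexible.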

Frank Sommers: TestNG pioneered the use of annotations in simplifying Java testing. Lately, JUnit 4 also added annotation support. To what extent does TestNG's annotation support currently differ from that of JUnit 4?

Cédric Beust: Not by much. We have a few common ones—@Test is an obvious example—but most of them are different. While the differences are unfortunate, they are simply a reflection of the different philosophies and scopes the two tools follow.

Frank Sommers: What are the simplest ways a JUnit user can start using TestNG? Do existing JUnit tests run under TestNG?

Cédric Beust: Yes, TestNG can run JUnit tests right out of the box. TestNG also makes it easy to migrate progressively from JUnit. And you can even run a mix of JUnit and TestNG tests. There are also a variety of tools that make it easy to convert your JUnit classes over to TestNG. For example, we have a tool that goes over your entire code base and converts all your JUnit test classes in one fell swoop. We also have plug-ins into IDEs, such as Eclipse, IntelliJ IDEA, and NetBeans.
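Running existing JUnit classes under TestNG is driven from the suite file: setting junit="true" on a test tag tells TestNG to treat the listed classes as JUnit tests. A minimal sketch, with an invented class name:

```xml
<!-- Mixing legacy JUnit classes into a TestNG run.
     The class name is hypothetical. -->
<suite name="Migration">
  <test name="Legacy JUnit tests" junit="true">
    <classes>
      <class name="com.example.LegacyJUnitTest"/>
    </classes>
  </test>
</suite>
```

A second test tag without the junit attribute can hold native TestNG classes, so both styles run in the same suite during a progressive migration.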

As for starting to write TestNG tests, reading the documentation on the TestNG web site is a good start. The Miscellaneous section contains a lot of pointers to articles that are useful to get started.

Frank Sommers: Could you describe some of the reporting features unique to TestNG?

Cédric Beust: Because of the amount of data TestNG needs to report, each run generates a very extensive HTML report that gives you all the information you need to assess your test run. The data provided include the test methods that ran, in what order they ran, and how long each took. What test groups were executed, and which groups were excluded, are also reported, including the methods in each test group. And for each method, success or failure information is provided, as well as abbreviated or full stack traces and messages that were issued using TestNG's Reporter API. The reporting API is very flexible, and users have written quite a few reporters for their own personal use, such as for PDF, JUnitReport, or XML outputs.

Frank Sommers: What are some of the new features in TestNG 5.x?

Cédric Beust: TestNG 5 was mostly about paying off technical debt. TestNG has been around for three years now, and while I have tried to keep its development and vision as consistent as possible, requests by users have certainly taken a toll on its overall shape. I thought I would use this major release to do a lot of clean-up and renaming. TestNG is always completely backward compatible, and I take pride that no release has ever broken any past feature.

TestNG 5.0 was also what I'd call a "user happiness release." User happiness is all about little details, so we cleaned up and improved the HTML reports, fixed GUI details in the various IDE plug-ins, improved the Ant task, and so on.

Frank Sommers: What are some of the directions you see TestNG take in coming releases?

Cédric Beust: The core TestNG has been fairly stable for a while now, and while I do add major features to the core once in a while—such as the newly added support for sequential testing—a lot of the work has been focused on IDE tools. We also focus now on integration with existing JUnit add-ons, such as DBUnit, and integration with other popular Java frameworks, including Spring and Maven. Most of these features come from external contributors.

I make a point of only adding a feature when users start heavily requesting it, and when the proposed feature has been discussed at length on the mailing-list. So it is sometimes a bit difficult to describe what the future holds. At the moment, there are two directions that I'm contemplating for possible future inclusion in TestNG.

The first one is distributed testing. As testing becomes more prevalent, it is not uncommon to see organizations running thousands of tests. Some of these tests take a long time to run, and there is nothing you can do about it because they just happen to test code that takes a long time to execute. I predict this issue to become only more important in the future.

TestNG's parallel mode comes in handy in those situations, but on a larger scale, you need distributed testing. I started working on a prototype of Distributed TestNG, which lets you start TestNG agents on pools of machines that can then be asked to run test fragments and return the result to their callers.

Another area that's interesting to me now is to provide more flexibility in how developers use annotations in TestNG. I've seen more and more users asking how they could replace annotations on the fly to handle special cases, such as replacing @Test(sequential = true) at runtime with a different value. I was part of the JCP expert group that defined annotations in JDK 1.5, and at that point we decided not to address that problem. Lacking general support for this in the JDK, individual tools will need to supply this functionality. Again, these are just two of the ideas floating around now for inclusions in a future release.

Frank Sommers: Any closing thoughts?

Cédric Beust: Whether you use TestNG or JUnit, keep your mind open, and don't let the tool you use restrict your creativity or limit your desires for certain features. It's very easy to become locked in a mindset when a field is dominated by a single tool for such a long time. We just need to shake up this rigidity and demand that our tools match our needs.


TestNG home page:


About the author

Frank Sommers is a Senior Editor with Artima Developer. He also serves as chief editor of the IEEE Technical Committee on Scalable Computing's newsletter, and is an elected member of the Jini Community's Technical Advisory Committee. Prior to joining Artima, Frank wrote the Jiniology and Web services columns for JavaWorld.