Parasoft released JTest 8, the latest version of its comprehensive Java testing tool. JTest product manager Nada daVeiga spoke to Artima about a combination of testing techniques that lead to the best code quality, why code coverage is a nuanced notion, and how tools help keep a test suite up-to-date in the face of constant change.
Parasoft this week announced the release of JTest 8, the latest generation of its comprehensive Java test suite. JTest product manager Nada daVeiga spoke with Artima about the philosophy behind JTest, and what benefits JTest provides beyond traditional unit testing tools:
We've traditionally advocated automated error prevention: Rather than develop your code and test defects out of that code, we recommend that you prevent defects from entering your code in the first place, as you're developing that code.
We believe there is no one way of testing, no single silver bullet. You have to have different strategies for finding different kinds of defects. For example, certain types of defects are most suitably addressed with static analysis: You can statically analyze your application's code paths and detect errors in a complex sequence of code, such as not properly initializing a variable, or assigning a variable the wrong value. Resource leaks are another example where static analysis helps.
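A small invented example may help illustrate the kind of path-dependent defect daVeiga describes. The class and method names below are hypothetical; the point is that one execution path leaves a variable unassigned in a way only path-sensitive static analysis (not a quick read) tends to catch:

```java
public class PathBug {
    // Defective version: 'label' is assigned on some branches but left
    // null when code < 0, so that path ends in a NullPointerException.
    // A static analyzer that walks execution paths would flag the
    // trim() call below.
    static String describe(int code) {
        String label = null;
        if (code == 0) {
            label = "ok";
        } else if (code > 0) {
            label = "warning";
        }
        return label.trim();
    }

    // Corrected version: every path assigns 'label' before use.
    static String describeSafe(int code) {
        String label;
        if (code == 0) {
            label = "ok";
        } else if (code > 0) {
            label = "warning";
        } else {
            label = "error";
        }
        return label.trim();
    }
}
```

Note that the Java compiler's definite-assignment check catches the second variant if a branch is missing, but not the first, where the variable is explicitly initialized to null; the null flows silently to the dereference.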
If you want to address the functional behavior of your code, unit testing is your best bet. JUnit may not always be able to provide the best solution for that, though, and you may also need to use [Apache] Cactus to test your servlets in a container.
Then there are issues with design, or missing requirements, or misunderstood requirements. Static analysis and unit testing won't do in those cases. Those are issues that only visual code review can catch. JTest is a comprehensive tool that provides all those features.
You also have to keep in mind that people constantly modify code. It's very useful to have an automated testing process in place so you can stay on top of that constant change. Our customers tell us that the biggest value they get from JTest is the ability to build a comprehensive regression testing suite, and capture the functional behavior of their code. That makes them comfortable with change, knowing that if they changed something, that change didn't break something else.
A debate over desirable test coverage flares up from time to time on Artima and other developer sites. While almost everyone aims for highly tested code, most developers—especially those in a business setting—are also painfully aware of the high cost of building and maintaining a comprehensive test suite. DaVeiga told us that her customers' view of test coverage is often more nuanced, and that tools can help automate the maintenance of test suites:
Test coverage is very important, but it's not everything. You want to have comprehensive coverage, but it's best if that coverage comes from a combination of sources. In addition, you want to look at your code base and decide which modules are really important. For those, you want to understand the complexity, and that code needs to be tested really well.
Test coverage varies from industry to industry. We have a few customers in the medical instruments field and, in their case, test coverage needs to be very high, and all the known defects must be addressed. Coverage in the financial industry is pretty high, too, mainly because they have to comply with regulations and various quality initiatives mandated by the government.
Test coverage can also be increased with the appropriate tools... JUnit is just a testing framework, and you still have to write your unit tests. Writing tests by hand is very time-consuming. Some of our customers estimate that, on average, it takes about three times longer to write tests [than to write the code]. JTest will automatically generate JUnit or Cactus tests for you by analyzing your code, and it will do so with as much coverage as possible.
That capability is based on JTest Tracer, which is an add-on, and is based on what we previously called Sniffer. Tracer monitors a running Java application as a user interacts with that application, and captures functional JUnit test cases. [For example,] if you have a banking application, and the user logs into that Web site and deposits some money, and then pays some bills at the same time, Tracer will monitor those actions, and produce corresponding JUnit test cases for testing that functionality.
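A captured test for that banking scenario might look roughly like the sketch below. The `BankAccount` class and its methods are invented here purely for illustration; the code Tracer actually generates depends on the application under test:

```java
public class BankAccountExample {
    // Hypothetical domain class standing in for the banking application.
    static class BankAccount {
        private long balanceCents;
        void deposit(long cents) { balanceCents += cents; }
        void payBill(long cents) { balanceCents -= cents; }
        long getBalance()        { return balanceCents; }
    }

    // A recorded session such as "deposit $100, pay a $40 bill" could be
    // replayed as a functional test that asserts the resulting balance.
    static long replayRecordedSession() {
        BankAccount account = new BankAccount();
        account.deposit(10000); // user deposits $100.00
        account.payBill(4000);  // user pays a $40.00 bill
        return account.getBalance();
    }
}
```

The value of such a recorded test is that it pins down observed behavior: if a later change makes the replayed session produce a different balance, the regression suite reports it immediately.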
DaVeiga pointed to four new JTest 8 features:
Bug Detective is the static analysis engine in JTest 8. It looks at source code, and analyzes things such as the execution paths of your code, identifying problems in the process, such as resource leaks, potential NullPointerExceptions, or un-initialized variables. In addition, we provide about 700 best-practices coding rules for Java EE 5, EJBs, servlets, Hibernate, JDBC, as well as regular object-oriented and metrics best practices.
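The resource-leak case is worth a concrete sketch. The example below is invented, but it shows the classic pattern a path-walking analyzer reports: the leaky version closes its stream only on the normal path, while the fixed version closes it on every path via `finally`:

```java
import java.io.*;

public class LeakExample {
    // Leak-prone pattern: if read() throws, close() is never reached
    // and the stream handle leaks. This is the exceptional path a
    // static analyzer follows to report a resource leak.
    static int firstByteLeaky(InputStream in) throws IOException {
        int b = in.read();
        in.close();
        return b;
    }

    // Fixed pattern: the finally block runs on every path, including
    // the exceptional ones, so the stream is always closed.
    static int firstByteSafe(InputStream in) throws IOException {
        try {
            return in.read();
        } finally {
            in.close();
        }
    }

    // Convenience wrapper so the safe version is easy to exercise.
    static int firstByteOf(byte[] data) {
        try {
            return firstByteSafe(new ByteArrayInputStream(data));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```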
While Bug Detective relies on source code, Tracer uses byte code to generate JUnit tests. Tracer is an enhanced version of our previous Sniffer product. A new feature in JTest 8 is the ability to generate Cactus tests for Java EE applications, allowing you to test Web applications running inside a container.
Finally, we also integrate a code review feature in JTest 8. We leverage our integration with the source control system to know who changed the code, and what the changes were. JTest's code review feature can map reviewers to developers, and provides an additional view that lets you review code without having to set up a physical meeting. That is especially useful for distributed development teams, and lets the people who know the required functionality best review the code. That makes reviews more productive.

To what extent do you combine different testing methods, such as unit testing, static analysis, or code reviews, to get a comprehensive picture of your code base's quality? How comfortable are you with the quality metrics you are able to obtain about your code?
> combination of testing techniques that lead to the best code quality

Just to make one thing clear: testing does not and cannot improve the quality of the code, which I take to mean how well the components of your system and their interactions are designed/structured.
When used effectively, testing techniques definitely help you reduce the number of faults in your system, but they cannot help you improve the overall design/structure of your system.
In theory, with the help of effective testing, you can create a fault-free system out of a very badly designed one: a real maintenance nightmare that no one would dare to touch.

Now, we cannot call that a good-quality system, can we?
> > combination of testing techniques that lead to the best code quality
>
> Just to make one thing clear. Testing does not and cannot improve the quality of the code, which I take to mean how well the components of your system and their interactions are designed/structured.
In a narrow view, I agree with you.
In a slightly wider view, having the developers add at least some kinds of tests to a codebase will in my experience usually increase code quality.
Unit tests will tend to increase code quality in two ways: First, they'll add one more, very intimate consumer to the code under test. This will usually require better factoring of the code than there was before adding the test. Initially writing with tests is even better for this.
Second, they'll provide more confidence to change the code. This makes it more likely that people will bother to clean up code, as the costs are less (very little risk of introducing bugs).
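The "intimate consumer" point can be made concrete with a small invented example. Code that mixes a calculation with printing and persistence cannot be tested in isolation; the moment you try to write a unit test for the rule, you are pushed to factor it out into a pure method the test can call directly:

```java
public class Discounts {
    // A pricing rule extracted into a pure function. Before extraction,
    // this logic might have lived inline in a method that also printed
    // and persisted the result, making it untestable in isolation.
    // The unit test becomes the first "intimate consumer" of this API.
    static double discountedPrice(double unitPrice, int quantity) {
        double rate = quantity >= 10 ? 0.10 : 0.0; // hypothetical bulk-discount rule
        return unitPrice * quantity * (1.0 - rate);
    }
}
```

Once the rule stands alone like this, asserting its behavior for boundary quantities is a one-line test, and cleaning up the surrounding I/O code no longer risks silently changing the pricing logic.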
Reviews will tend to increase code quality by making the code more readable. The reviewer will usually want to be able to read the code or changes, and will give feedback if he or she finds it hard. Reviewers will also often give tips about code style and how to write things more readably, and so on. Even just the knowledge that a review is coming for a piece of code will often make the programmer write better, cleaner code than he would if he could keep it in the closet.
I have also used system scale tests to support refactoring - so those tests help for better code quality too, but that's much more of an indirect help.