Summary
Today I gave a presentation about ScalaTest at a local company. I showed how ScalaTest integrates with JUnit and makes some things easier than they are with JUnit in plain Java. It made me curious to find out what JUnit users would say their actual pain points with JUnit are today.
With the recent release of ScalaTest 1.0, I have started an effort to explain to people what ScalaTest is all about. A few years ago I had a series of blog posts asking people to describe their actual pain points for Java, Ruby, C#, and Python, and I realized that it might help me explain ScalaTest better if I could gain a better idea of what people actually find painful using JUnit today.
Please post your list of JUnit pain points in the discussion forum. To weed out minutiae (JUnit pinpricks), try to limit yourself to your top three pain points, in reverse order: your most painful point is number 1, the second most painful number 2, and the third most painful number 3. Lastly, for each pain point, please try to explain the real business cost of the problem. By the way, I don't intend this as criticism of JUnit, but more as a user feedback fest, which would also be useful to the people who bring you JUnit.
0. The execution model is obscure. How do tests get run? How do they get collected?
1. Setup overlaps with the native constructors. When should you use one, and when the other? Is this to get around finalization being non-deterministic?
2. Extracting literals makes the tests smaller but confusing, so I end up with test values obscured more than I would prefer.
3. It is hard to debug match failures between collections.
4. JUnit is inventing its own expression language, which is less powerful than the native one, so that failures can be explained (org.hamcrest.Matcher<T>). Some kind of macro which prints the expression and inputs would be better.
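As an illustration of point 4, here is a minimal sketch contrasting the two styles in JUnit 4; the list data is made up, and both assertions fail on purpose so the difference in failure reporting is visible.

    import static org.hamcrest.CoreMatchers.is;
    import static org.junit.Assert.assertThat;
    import static org.junit.Assert.assertTrue;

    import java.util.Arrays;
    import java.util.List;
    import org.junit.Test;

    public class MatcherStyleTest {

        private final List<String> owners = Arrays.asList("alice", "bob");

        @Test
        public void plainBooleanAssertion() {
            // Ordinary Java expression; on failure JUnit reports only
            // "java.lang.AssertionError" with no hint of the actual value.
            assertTrue(owners.size() == 3);
        }

        @Test
        public void hamcrestMatcherAssertion() {
            // The report explains itself ("Expected: is <3> but: was <2>"),
            // at the cost of writing assertions in a second, matcher-based vocabulary.
            assertThat(owners.size(), is(3));
        }
    }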
> The worst one for me is not having a standard way to isolate unit and integration tests - especially under Eclipse!

Spring has something like that:
http://static.springsource.org/spring/docs/2.5.6/reference/testing.html#integration-testing-common-annotations

Although I have to admit, I use a naming convention and a pattern instead.
It would be nice to have some way to mark some tests as non-blocking, i.e., not make the build stop if a test fails. This would be useful in the case of known bugs, which could have tests to document them even though there is no fix yet. Still, every source control check-in should pass all other tests.
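For comparison, the closest thing stock JUnit 4 offers is @Ignore, which keeps the build green and documents the bug, but gives no signal once the bug is actually fixed. A minimal sketch, with a made-up stand-in for the buggy production code:

    import static org.junit.Assert.assertEquals;

    import org.junit.Ignore;
    import org.junit.Test;

    public class KnownBugTest {

        // Stand-in for the code under test; imagine this is the buggy production method.
        static int roundHalfUp(double value) {
            return (int) value;  // bug: truncates instead of rounding half up
        }

        // The test documents the known bug but is skipped, so check-ins still pass.
        // The drawback: nothing tells you when the bug is eventually fixed.
        @Ignore("known bug: rounding truncates instead of rounding half up")
        @Test
        public void roundsHalfUp() {
            assertEquals(10, roundHalfUp(9.5));
        }
    }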
> It would be nice to have some way to mark some tests as non-blocking, i.e., not make the build stop if a test fails. This would be useful in the case of known bugs, which could have tests to document them even though there is no fix yet. Still, every source control check-in should pass all other tests.

I have something in ScalaTest that might address this. Let me explain it, and I'd like to know whether you think this need would be met if something similar were added to JUnit. Basically, you can mark code as "pending until fixed." You surround the code that is causing a test to fail with a pendingUntilFixed block, and while that code throws an exception, the test is reported as pending rather than failed. (When you run with JUnit, the pending status gets transformed into ignored, because JUnit doesn't have a notion of pending.) If the test gets fixed later, that block of code will no longer throw an exception, and at that point pendingUntilFixed will itself throw a TestFailedException. So now you do get a failed test, and you have to go in and remove the pendingUntilFixed block to get the test to pass. The code looks like:
    pendingUntilFixed {
      // test code that temporarily fails
      val result = myAPI.getResult()
      assert(result === 17)
    }
This was inspired by a feature in Ruby's RSpec (and also specifically requested by ScalaTest users). Info is here:
For the most dogmatic use of "unit" testing, JUnit works fine. Truly valuable testing, however, often requires a fair amount of integration testing. Mocking out the behavior of systems that are outside the boundary of an integration test, though, can be a tedious and error-prone process.
What I'd love to see is a mocking framework that can perform record and playback. That is, you could mark the classes (or packages) that you want to record during a "live" run. Then, when you use those classes during an integration test, their methods would return the previously recorded values based on the input.
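To make the idea concrete, here is a minimal sketch of what recording and replaying might look like for interface-typed collaborators, built on java.lang.reflect.Proxy. This is not part of any existing framework; all names are invented, and a real implementation would also need to persist the recordings and handle exceptions, overloads, and non-serializable arguments.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    public final class RecordingProxy {

        // Recorded return values, keyed by method name plus argument list.
        private final Map<String, Object> recordings = new HashMap<String, Object>();

        private static String key(Method method, Object[] args) {
            return method.getName() + Arrays.deepToString(args == null ? new Object[0] : args);
        }

        // Wraps the real target so every successful call is recorded.
        @SuppressWarnings("unchecked")
        public <T> T record(final T target, Class<T> iface) {
            return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        Object result = method.invoke(target, args);
                        recordings.put(key(method, args), result);
                        return result;
                    }
                });
        }

        // Returns a stand-in that replays recorded values instead of calling the real system.
        @SuppressWarnings("unchecked")
        public <T> T playback(Class<T> iface) {
            return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) {
                        String key = key(method, args);
                        if (!recordings.containsKey(key)) {
                            throw new IllegalStateException("No recording for " + key);
                        }
                        return recordings.get(key);
                    }
                });
        }
    }

During the live run you would route calls through record(realService, ServiceInterface.class); in the integration test you would inject the object returned by playback(ServiceInterface.class) instead of the real system.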
> 0. The execution model is obscure. How do tests get run? What exactly would you like to know about how tests are run?
> How do they get collected? This is up to the environment which drives JUnit. JUnit basically just takes a list of classes to be run.
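To make the "list of classes" point concrete, here is roughly what the programmatic entry point looks like; MyFirstTest and MySecondTest are placeholders for your own test classes. IDEs and build tools do essentially the same thing: they decide which classes to hand over and render the Result.

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;

    public class RunTests {
        public static void main(String[] args) {
            // JUnit is handed a list of classes; it finds the @Test methods in each
            // via reflection, runs them, and collects the outcome into a Result.
            Result result = JUnitCore.runClasses(MyFirstTest.class, MySecondTest.class);
            for (Failure failure : result.getFailures()) {
                System.out.println(failure);
            }
            System.out.println(result.wasSuccessful() ? "OK" : "FAILURES");
        }
    }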
> 1. Setup overlaps with the native constructors. When should you use one, and when the other? Is this to get around finalization being non-deterministic? Always use a setup method instead of a constructor. The former is more explicit and leaves JUnit in control of when it runs.
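A minimal sketch of that shape in JUnit 4 (the fixture is made up): the @Before method runs before every @Test on a fresh instance of the class, and @Before methods declared in superclasses run first, so shared fixtures can be layered.

    import static org.junit.Assert.assertTrue;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Before;
    import org.junit.Test;

    public class SetupMethodTest {

        private List<String> fixture;  // made-up fixture

        // Runs before every @Test method, on a fresh instance of this class.
        // Superclass @Before methods run first, so base fixtures can be layered.
        @Before
        public void setUp() {
            fixture = new ArrayList<String>();
            fixture.add("seed");
        }

        @Test
        public void fixtureIsPrepared() {
            assertTrue(fixture.contains("seed"));
        }
    }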
> 4. JUnit is inventing its own expression language, which is less powerful than the native one, so that failures can be explained (org.hamcrest.Matcher<T>). Some kind of macro which prints the expression and inputs would be better. I agree. "Assertions should be plain boolean expressions" is one of the central ideas behind Spock (http://spockframework.org), a Groovy-based testing framework targeting Java and Groovy applications. (Disclaimer: I'm the author of Spock.)
Just yesterday I saw that the Ruby testing framework Cucumber allows you to add tags to tests. When you run a test suite, you can use the tags to filter the tests to be executed. That would be nice to have in JUnit.
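For what it's worth, something tag-like can be approximated with a custom annotation and JUnit 4's Filter API. A rough sketch, assuming a JUnit 4 version whose Description exposes method annotations; the @Tag annotation and the MyTaggedTests class are invented for illustration:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import org.junit.runner.Description;
    import org.junit.runner.JUnitCore;
    import org.junit.runner.Request;
    import org.junit.runner.manipulation.Filter;

    public class TaggedRun {

        // A home-grown tag annotation; nothing like this ships with JUnit itself.
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.METHOD)
        public @interface Tag {
            String value();
        }

        public static void main(String[] args) {
            // Run only the test methods carrying @Tag("slow") from a (made-up) test class.
            Filter slowOnly = new Filter() {
                @Override
                public boolean shouldRun(Description description) {
                    Tag tag = description.getAnnotation(Tag.class);
                    return tag != null && tag.value().equals("slow");
                }

                @Override
                public String describe() {
                    return "tests tagged 'slow'";
                }
            };
            new JUnitCore().run(Request.aClass(MyTaggedTests.class).filterWith(slowOnly));
        }
    }

Test methods would be annotated with @TaggedRun.Tag("slow"); anything without that tag is filtered out before the run starts.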
My biggest JUnit pain point is that once I've invested in writing lots of unit tests, I can't do major refactorings of the codebase without throwing away all that investment. This is probably not a JUnit-only pain point.
As I discover a problem domain more, I obviously want to improve my design, and that means throwing away some code, moving methods around, etc. That's all fine, except that all the tests for those methods and code now become either obsolete or require major rework.
I'm curious: How do others doing TDD deal with this?
To some degree, isn't this a sign that you're testing implementation details? Creating unit tests for every internal class and API seems to be common, but bad, practice.
Unit tests are for APIs that are relatively stable: "published" facades for meaningful units of code. These APIs are essentially asserted not to change frequently, and possibly not at all in certain ways. For example, code that called the APIs in previous versions/releases may be asserted to continue functioning irrespective of possible refactorings -- in which case the unit tests help ensure this as well.
If you have really tricky implementation details that warrant a unit test, then it is certainly fine to write an implementation unit test for such a case. Just be cognizant of the fact that you are testing an implementation detail and any substantive change in the implementation (e.g. refactoring) will completely invalidate such a test.
My two biggest gripes with JUnit are:
1) If an exception occurs in a @Test method, and then one occurs in an @After method, only the @After exception is reported. Were it not for this, @After would be a fine place to put things like verifyMocks calls, but as it stands, this tends to obscure the original source of failure.
2) @BeforeClass methods need to be static, meaning that they cannot use the Template Method pattern. (TestNG does not have this problem.)
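A tiny reproduction of the first gripe (both exceptions are thrown deliberately; what ends up in the report depends on the JUnit version, but the complaint is that the @After failure can drown out the one from the test body, which is the one you actually need to see):

    import org.junit.After;
    import org.junit.Test;

    public class MaskedFailureTest {

        @Test
        public void businessLogic() {
            // The real defect we want to see in the test report.
            throw new IllegalStateException("actual defect in the code under test");
        }

        @After
        public void verifyMocks() {
            // A verification failure here can hide the exception above in the report.
            throw new AssertionError("mock expectations not met");
        }
    }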
> As I discover a problem domain more, I obviously want to improve my design, and that means throwing away some code, moving methods around, etc. That's all fine, except that all the tests for those methods and code now become either obsolete or require major rework.
>
> I'm curious: How do others doing TDD deal with this?
That's one of the reasons I gave up on TDD and unit testing in general. I value automated tests more than ever - I just don't write them at such a low level; now they cover functionality, not implementation details.