Re: Fine-grained low-level tests considered harmful
Posted: Apr 28, 2006 12:32 PM
piglet writes:
> The rule "Write your tests before you write your code", if taken literally, is bullshit (strictly speaking, it doesn't even make sense - after all, tests *are* code).
Yes, if taken literally, but only the most pathologically literal-minded would do that.
Anyway, tests, in their TDD sense, are more like calibrations.
> You can't write tests (especially if you are talking of fine-grained, low-level tests) unless you already know your functional code structure in detail.
No. That assumes that testing is your aim rather than design. The regression suite you end up with at the end is simply a pleasant side-effect of the process. Sure, it's good to go in with a clear idea of the domain you're coding to, but "know[ing] your functional code structure in detail"? That's not a necessity. It's all about design. If you misstep, refactor.
> IOW the interfaces must be known before they are coded.
No. Only the part under test needs to be known. It's good to have a rough idea of what your interfaces will be, but until you make them public, they're far from fixed and are open to change.
The tests are, in part, an attempt to codify the interface.
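To illustrate (a minimal sketch; the `Stack` class and its method names are invented here purely for illustration): when the test is written first, it ends up documenting the interface you wish you had, before any implementation exists.

```python
# Hypothetical TDD example: the test below was sketched first, so it
# doubles as a specification of the interface - construction, push,
# pop in LIFO order, and an emptiness query.

class Stack:
    """Minimal implementation, written to satisfy the test."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items


def test_stack_interface():
    s = Stack()
    assert s.is_empty()
    s.push(1)
    s.push(2)
    assert s.pop() == 2   # LIFO: last pushed comes out first
    assert s.pop() == 1
    assert s.is_empty()


test_stack_interface()
```

Until the interface goes public, nothing stops you renaming `push`/`pop` or reshaping the class; the test just gets updated along with it.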
> But getting the interfaces right is hard (often it is *the hard part*) and usually needs several cycles of trial and error. Unit tests are great to verify implementations but they don't help to get the interfaces right.
Again, you're mistaken if you're talking about TDD, which is a design activity. However, with post-hoc unit testing, you're correct.
Part of the point of TDD is to arrive at a point where you have easily testable interfaces, and easily testable interfaces are easily usable ("right") by definition.
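A concrete (and deliberately contrived) sketch of that claim, with invented function names: an interface that is awkward to test is usually awkward to use, and writing the test first pushes you toward the explicit version.

```python
import datetime

# Hard to test: the function reaches for the clock itself, so a test
# cannot control what "now" is - and neither can any other caller.
def greeting_implicit():
    hour = datetime.datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# Easy to test: the dependency is passed in. That same change also
# makes the function usable in any context that already has an hour.
def greeting(hour):
    return "Good morning" if hour < 12 else "Good afternoon"

assert greeting(9) == "Good morning"
assert greeting(15) == "Good afternoon"
```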
> I honestly don't understand how one can seriously work that way.
Different strokes for different folks.
> And I do agree with Frank that programming is a creative process that simply isn't compatible with that sort of rigid rules.
They're not that rigid! Nobody out there is saying it's the one true way; it's just a practice that works well for many people in many contexts. If your context isn't suited to it, fine.
> My tests are usually higher-level tests that test a recognizable chunk of functionality (usually at least on class level, not on method level) and that are themselves short.
Ditto.
> I don't believe that it makes sense to produce test code of the same or even greater order of magnitude than the functional code to be tested, for the well-known reasons:
>
> - Test code needs to be debugged and maintained just like any other code.
It also tends to be much simpler, meaning it has a much lower maintenance overhead than regular code. If your tests are complex, there's probably something wrong with your interfaces: it's a code smell.
> - Fine-grained tests are really testing implementation details, which is unnecessary and hinders refactoring.
Then, by that token, whenever your code is used elsewhere in the codebase, refactoring is hindered.
But that's not true, is it?
And the same goes for tests. I rarely have to do any special refactoring for tests, and less than I have to do for regular code as my IDE takes care of most of the work automatically. And whenever I have to do more invasive refactoring by hand, the tests help me ensure that I don't get any nasty regressions, which saves me time I'd otherwise spend debugging.
It's a kind of proactive laziness.
> - Test code is client code. If you need lots of client code just to *use* your functional classes, maybe your interfaces are not well-designed. If your functional classes expose lots of methods that each need to be tested, maybe there are redundancies.
...and identifying that kind of thing is part of the point. I repeat, TDD is a design method, not a testing method.
> I try to write the functional code in a way that already anticipates the testing. As an example, you often have operations that are invertible, e.g. serializing and deserializing objects from xml. The natural test then would be to make the operation, invert it (by calling the functional code) and test for equality. This test is extremely short and doesn't need to be modified each time some implementation detail changes.
Yup, but you'd also need tests that check the various edge cases (such as what happens when you serialise an array with no elements, one element, or many elements, or when you attempt to serialise null somewhere).
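Those edge cases cost almost nothing to add to a roundtrip test. A sketch (the post talks about XML; the standard-library `json` module is used here only as a convenient stand-in for any invertible serialise/deserialise pair):

```python
import json

def roundtrip(value):
    # Serialise, then invert the operation, exactly as described above.
    return json.loads(json.dumps(value))

# The core "invert and compare" check is one line per case, covering
# the empty, single-element, many-element, and null edge cases.
for case in [[], [1], [1, 2, 3], None, [None, "x", []]]:
    assert roundtrip(case) == case
```

Note that a roundtrip test still says nothing about the wire format itself, so if the serialised form is part of your public interface, that needs its own test.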
For me, a test is a test, no matter whether it's calibrating something high-level or low-level. If a piece of code is trivial, testing it isn't quite as important, but if it does something complex, no matter how high- or low-level it is, I want to make sure it works.