Registered: Dec, 2002
Re: Testivus - Testing For The Rest Of Us
Posted: Feb 7, 2007 7:08 PM
> But since I haven't had anyone come to the defense
> of debugging - until now - I hadn't given much thought to
Well, I think we all got sick of arguing the point against Robert Martin.
> I don't like this approach because time invested in
> testing results in reusable assets, while time spent
> on the debugger in this way is typically spent chasing
> a particular problem and leaves no reusable artifacts.
No matter which method/tool you choose, you're still tracking down a bug and then you try to fix it. Writing tests doesn't fix bugs. It is one way to track down bugs, but it doesn't fix them. A debugger is another way to track down bugs. Sometimes it's a faster way. Sometimes it's slower.
I think we need to stop calling the tool a "debugger", and start calling it a "runtime analyser".
If someone tests a piece of software, finds something that doesn't work, and then goes and tries to make it work, that's debugging.
It doesn't matter whether you're using a testing tool, a "runtime analyser", printf statements, or some other tools, it's still debugging.
It doesn't matter whether the bug was found by an automated unit test, a functional test, a regression test, or exploratory testing. It doesn't matter whether you found it straight away after writing the code, or someone else found it 3 years later. The process of removing a defect is called debugging.
If I have a failing test, I quite often fire up the "runtime analyser" to help me work out why the test and the code don't match. When I know that, I can fix whichever one is wrong. Sometimes I will change my tests to catch this scenario a little more explicitly so that next time around (if the bug returns) the root cause will be more obvious and can be fixed more quickly.
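To make that concrete, here's a minimal sketch in Python of what "catching the scenario a little more explicitly" might look like. The function and the bug are invented for illustration:

```python
def parse_price(text):
    """Hypothetical function under test: turn "$1,234.50" into a float."""
    return float(text.replace("$", "").replace(",", ""))

# Original coarse test: a failure here says little about the cause.
assert parse_price("$1,234.50") == 1234.50

# Tighter test added after a "runtime analyser" session showed the
# thousands separator was the culprit: if this bug ever returns, the
# failing assertion points straight at the root cause.
assert parse_price("1,234") == 1234.0
```

The point isn't the tool used to find the comma bug; it's that the test suite now documents the scenario explicitly.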
The tool isn't the problem.
It doesn't matter [*] what tool people use to work out why the bug (i.e. failing test) is there. The question is what they do once they find it. If they simply fix the bug and then ship it, then they're setting themselves up to have the bug come back again. So they should make sure they have a test to cover that scenario.
[*] Well, some ways are more efficient than others, so it probably does matter to the person paying the bills, but it shouldn't matter in this argument.
I would make this one of the guiding principles of testing. Aim to fix each bug once only.
* Use whatever tool you want to find the bug.
* Use whatever tool you want to fix the bug.
* Use whatever tool you want to keep the bug from coming back.
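The "fix each bug once only" principle usually boils down to a regression test pinned to the bug. A hypothetical Python sketch (the function, the bug, and the bug number are all invented):

```python
def normalize_path(path):
    """Hypothetical function that once mishandled doubled slashes."""
    parts = [p for p in path.split("/") if p]
    return "/".join(parts)

def test_bug_1234_double_slash_regression():
    # Regression test named after the (invented) bug report. However the
    # bug was originally found and fixed - debugger, printf, or failing
    # test - this is what keeps it from coming back silently.
    assert normalize_path("a//b") == "a/b"

test_bug_1234_double_slash_regression()
```

Naming the test after the bug report also makes the history obvious to whoever trips the assertion three years later.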
I agree with Cedric that the suggestion that the debugger is redundant once you start doing automated testing (or TDD, or BDD, or whatever) is a harmful one. It is a useful tool that has been specifically designed to allow you to observe the behaviour of a piece of software as it executes.
I think one of the greatest dangers is people sticking to one tool exclusively (be that their testing library, their debugger, or their logging package) when there are a great number of tools available that can deliver value in different ways.