I sometimes code for days without being able to compile. Complex ideas sometimes take days to build even when they are componentized. When I finish, I step through every line of code. Wow! Every line of code? That must be a horrible waste of time! Actually, what I find is that within a minute or so I find my first bug. Then I fix it, which may take 5 minutes or it may take an hour. Then I find my next bug within about a minute, and again it may take 5 minutes to fix or it may take an hour. So in the end I'm spending probably 95% of my time fixing bugs, not stepping through the debugger. And where am I stepping through the debugger? In my unit test.
I then pair program with another programmer. I let the other programmer do it their way, because they're not going to listen to me if I try to guide them anyway. Then, after they have wasted three quarters of an hour (yes, I have actually kept simple metrics on this stuff), I politely point out that the bug could have been found in the debugger in 30 seconds. The light bulb goes on. They won't acknowledge it, due to the typical programmer ego, but they take that information with them and use it for the rest of their days.
Debugging does not consume time; fixing bugs consumes time. The debugger should always be used on a first step-through. You simply need to keep reminding yourself that you are spending 95% of your time fixing bugs, not stepping. Writing code is no more than an educated guess. Some are better at a first guess than others, but we all guess wrong on our first attempt some very large portion of the time. And if the piece of code we are writing is large enough, 100% of us guess wrong on a first try. Being in the debugger, in contrast to what some may think, allows you to use your mind. It allows you to see the bugs that your current tests are not covering. It allows you to think about what might happen if you go through that else-if, given your current knowledge of the state of the internals of the program. This is something that just running unit tests can never give you. And the two are not mutually exclusive: seeing that else-if may allow you to write a new test.
Rely on your tests after your first big step-through. This will not impede you; it will decrease your development time. Also do a step-through when you make comprehensive changes to some section of your code. And take this with a grain of salt: there are cases where the debugger is simply the wrong tool for the job.
As a last note: I have attempted writing code for long stretches without using the debugger. Have you tried the opposite of what you are doing for any reasonable length of time? When you do, keep track of the amount of time you spend stepping and the amount you spend fixing bugs. You'll be amazed at how little of your time is actually consumed in the actual act of stepping. Just because we grit our teeth at having to enter the debugger (and I do) does not mean that it won't make us more productive.
> given the fact that you don't understand the file format,
> shouldn't you try and understand it before fixing
> it, perhaps by writing a few unit tests?
This is a very good point. I've seen lots of new bugs caused by people trying to fix bugs in code that they don't understand. In the case of AFM files, there's really no excuse -- the documentation is easily accessible (just google "afm file format"; Adobe's spec is the first hit) and the format is not very complex.
> > A little *too* clever sometimes.
>
> A print statement does not (usually) interrupt
> the flow of execution. A breakpoint does.
That's why the best way to do this with a debugger is to install a watchpoint at the location and have it do a println there. It won't stop the code, but it will print your debug output.
No interruption, no spurious println in your code and the exact effect you were waiting for.
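For anyone who hasn't seen this trick: in gdb, for instance, it can be done with breakpoint command lists (the file, line, and variable here are made up for illustration):

```
(gdb) break parser.c:120
(gdb) commands
> silent
> printf "count = %d\n", count
> continue
> end
```

The `continue` at the end is what keeps execution from stopping. Most Java IDE debuggers offer the same thing as a non-suspending breakpoint that logs an evaluated expression.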
> Have you ever blown an hour or two debugging the right code
> in the wrong context because you didn't realize that the
> breakpoint would get hit more than once, and so you used the
> first hit instead of the 17th?
Sure, the very first time I used a debugger. About 20 years ago :-)
Besides, good luck finding the 17th println in your console.
I, at least, can tell a debugger to stop the 17th time the breakpoint is hit. With your method, I need to manually and visually discard the first 16 printlns.
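In gdb terms, for example, that's a one-liner with an ignore count (the location and breakpoint number are made up for illustration):

```
(gdb) break parser.c:120
(gdb) ignore 1 16    # skip the first 16 hits; stop on the 17th
```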
I have also done scientific computing. At one time, I found debuggers extremely useful because my algorithms were overly complex and hard to test. As I gained experience, I learned to use better factorization of my code, unit testing, and logging to get the quality and performance I needed. I still use a debugger once in a while, but only when I'm in a pinch. Debuggers haven't been a very time-efficient tool in my multithreaded, multiprocessor environment. It usually takes less time to go for a walk, think about what I'm trying to accomplish, and then write a better function that doesn't need to be debugged.
Personally, I consider that being forced to use a debugger speaks poorly of one's development skills. It should only be used as a last resort. I prefer doing "conceptual debugging," where I go over the steps mentally, examine them, and try to think about where it could go wrong. It is much faster than debugger-based debugging, not to mention the intellectual satisfaction you get from the exercise :)
And yes, "test before you code" is the prescription for the masses for avoiding debugging. It makes coding wonderfully simple.
> Personally, I consider that being forced to use a debugger
> speaks poorly of one's development skills. It should only
> be used as a last resort. I prefer doing "conceptual
> debugging," where I go over the steps mentally, examine
> them, and try to think about where it could go wrong. It is
> much faster than debugger-based debugging, not to mention
> the intellectual satisfaction you get from the exercise :)
>
> And yes, "test before you code" is the prescription for the
> masses for avoiding debugging. It makes coding wonderfully
> simple.
Well heck, cranking out unit test code seems equally mundane. Shouldn't Gedanken Experiments make them superfluous as well?
Well, Bob: I've seen this myth. But rather than address whether or not debuggers are worth their salt (I happen to think they are), I'd like to make a point re. test-driven development.
It doesn't work for complex data-driven apps, where applications configure themselves. The real reason for this is combinatorial complexity: the number of test cases one would have to write for even an application of medium-level functionality would be prohibitively high.
The other kind of app (the kind we tend to write) involves mathematical optimization tools -- programs that can run for hours. We do write test cases, but they are carefully constructed to expose the edge cases. We don't, and indeed can't, write the test cases before doing development; in fact, it's often the other way around.
And as I have explored a lot of the domains I tend to work on, I've come to hold the view that test-driven development applies only in very special cases.
So I think this hype about test-driven development, pushed without carefully understanding what kinds of programs it is truly useful for, will pass.
Well, Sundar: I've seen this myth about test-driven development (TDD).
I suspect business-oriented applications make up by far the largest percentage of software out there; I've yet to see TDD fail to apply to any of these apps. So perhaps it applies only in "very special cases" from within your very limited worldview.
Also, I think you're missing the point if you think the goal of TDD is to exhaustively test every potential data variation. Learn to use that OO tool a bit better and understand how to get it to manage complexity. It sounds like right now you're pretty much giving up and suggesting that it's impossible to test your app.
I somewhat agree with your points, Robert, but I think the issue is that you have really overstated the position. The article would be much better titled "Debuggers Can Be a Wasteful Timesink." To say that they *are* a wasteful timesink implies that there is never a case where they are not, and that simply is not the case.
I certainly agree that unit testing really decreases the amount of debugging you need to do, but it does not eliminate it entirely. In fact, I would say that because unit testing eliminates most of the easy problems, you are left with the more difficult problems which may be easier to find with a debugger. Therefore unit testing may actually increase your use of a debugger as a percentage of your debugging.
There are many classes of problems where unit testing and logging are not going to find your bugs. I work in embedded development, and you really need a debugger to debug many of the problems here (and sometimes even that isn't enough). For example, we had been working like crazy trying to figure out why our system crashed occasionally. It turned out that the timing of the interface to the DDR RAM was dangerously close to failing, and once in a while it would read an instruction slightly incorrectly. Try debugging that with unit tests and print statements.
I agree with the sentiment that one should not use an electron microscope when a magnifying glass will do. But that doesn't mean that we should get rid of electron microscopes. They are a necessity for some things, just not everything.
I could have solved the problem just as easily with a unit test. I have solved very similar problems with unit tests.
Like a problem we had with Java's Double.MIN_VALUE. One might think that Double.MIN_VALUE is like Long.MIN_VALUE and represents the largest negative number that a Double can hold. But it doesn't. It represents the number closest to 0, without being 0, that a Double can hold. I found and solved a bug that resulted from this misunderstanding in a half hour or so, using unit tests and print statements, after several other staff members had spent days using a debugger.
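A tiny sketch makes the trap concrete. The constants are standard Java; the class, variable names, and the running-maximum scenario are mine, not the original bug:

```java
// Demonstrates the Double.MIN_VALUE misunderstanding described above.
public class DoubleMinValueDemo {
    public static void main(String[] args) {
        // Long.MIN_VALUE really is the most negative long:
        System.out.println(Long.MIN_VALUE);          // -9223372036854775808

        // Double.MIN_VALUE is NOT negative -- it is the smallest
        // positive value a double can represent:
        System.out.println(Double.MIN_VALUE > 0);    // true

        // So seeding a running maximum with Double.MIN_VALUE
        // silently ignores every negative input:
        double max = Double.MIN_VALUE;               // buggy seed
        for (double v : new double[] { -3.5, -1.2 }) {
            if (v > max) max = v;                    // never taken here
        }
        System.out.println(max == Double.MIN_VALUE); // true -- still the seed

        // The most negative finite double is -Double.MAX_VALUE:
        double correctSeed = -Double.MAX_VALUE;
        System.out.println(correctSeed < -3.5);      // true
    }
}
```

A unit test asserting the expected maximum of a list containing only negatives exposes this bug immediately.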
I see that my last reply did not include the context of the reply I was replying to. The context was Susan's example of a 64 bit number coming back with an extra zero.
Anyhow, Bob, I'm very glad you wrote this, and I see it is having the same response and effect that the topic has when it pops up on TDD or XP mailing lists.
I guess our way of writing software requires too great an epiphany for everyone to make the leap. I am a little prejudiced, since I never liked debuggers even before TDD. I just didn't know why. Debuggers never seemed to offer a return on the investment of getting really good at them, so I found other ways to solve problems in my code. With TDD, I can now not only solve complex problems in my code, but also leave self-documenting artifacts that others can use to understand my code when I've moved on to bigger and better things.