> Wouldn't it have been nice if the code had been written in a clean and expressive way so that it told you how it worked? Wouldn't it have been nice if the code had been instrumented with unit tests that helped you understand it? Don't you think it is our professional duty to write code that eliminates the need for others to have to use debuggers to understand it?
You sound like the programmer who writes code, and I happen to be the guy who has to debug your code 5 years later. Am I supposed to "wish" you had written better code? Nope, I have a job to do, so I reach for just the right tool, the debugger, because someone out there has a business (or a hospital, or a bank, ...) that depends on the buggy software you wrote 5 years back.
Also, software outlives "good coding practices". What was good a few years back is considered bad taste now.
> > I'd really like to hear how a debugger has ever wasted your time. Honestly, just how?
>
> Simple. By trying to breakpoint my way into the failing context. I've spent hours trying to get to just the right point in the execution thread. I've spent hours in the wrong place, convinced it was the right place. I've spent hours backtracking from a context, not quite sure exactly how I got there. Only to find that the bug was stupidly obvious and I could have easily found it simply by reading the code that I was breakpointing through.
> Note, I am not saying that there is not a place for debuggers. I just think that they should be considered a tool of last resort.
What you really mean is, "don't jump on to the debugger too early, because I've myself wasted a lot of time doing that." Yes, one needs to know how and--more importantly--when to use the debugger.
I concur. I usually start setting breakpoints and stepping through the code only when I'm sure which function and which particular loop I need to examine. And yes, *before* I begin doing that, I always read through the code first to pick up some clues.
I have to admit I agree with the original post and am somewhat bemused by the replies that assume the debugger police are going to round them up and confiscate their favourite tools. Perhaps a deep breath and a reread is in order.
After all, if someone states that a debugger is a powerful tool of last resort, then replying that you have to use it because your work is too complex and/or broken (for reasons outwith your control) for other methods to work is *agreement*.
I didn't see this blog so much as a recommendation, but more as a comment on a social change that has already happened. It just so happens that people sometimes need a nudge to realise their working methods (i.e. habits) are out-of-date.
Note to touchy (de)buggers: if you're thinking and reflecting on your practice of software engineering and have decided you still need/want a debugger, then good for you. If, on the other hand, you're using one because that's what you've always done (and especially if you're passing this knowledge on to the next generation), then maybe you need to re-assess.
Pray, I fail to understand how writing unit tests can be faster than stepping through the code, given that I don't understand the file format (thus making it difficult, if not impossible, for me to write unit test cases in the first place!).
Given that you don't understand the file format, shouldn't you try to understand it before fixing it, perhaps by writing a few unit tests?
I can vouch for the fact that in the TDD projects I have witnessed, unit testing has eliminated a large chunk of debugger usage.
It is plainly still useful, as an additional tool, but is used in moderation.
An interesting theme is developing in posts that insist that debuggers are vital: C/C++. Consider the problems as described:
> The conversion routine resulted in the internal representation of a floating point value represented with a trailing zero after the decimal point being slightly different (by one bit on a 64-bit machine) from the apparently same value represented without that trailing zero.
> So says the person who's never seen a printf statement magically fix some broken code. This is not unheard of (it happens because printf has a return value which normally ends up fixing some broken value on the stack which has resulted from improper initialization).
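The mechanism behind that quote is easy to demonstrate: printf has a return value (the number of characters written), so calling it is not a pure side effect, and in broken C code the extra call can perturb registers and stack contents enough to mask a symptom. A minimal sketch of the return-value point (the function name is invented for illustration):

```c
#include <stdio.h>

/* printf returns the number of characters written (or a negative
 * value on error). In code that reads uninitialized or clobbered
 * values, the mere act of calling printf changes registers and
 * stack contents, which is why adding a print statement sometimes
 * "fixes" the symptom without fixing the bug. */
int chars_printed(const char *msg)
{
    return printf("%s", msg);  /* count of characters written */
}
```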
> When I was a C++ programmer I was addicted to debuggers, but in the last two years of almost exclusively Java development I have found myself using a debugger only a couple of times.
So it can certainly be said that the debugger is useful for stepping through code in languages where things like pointers, the stack, and numeric representation are all sources of problems. Problems of this sort come in addition to the question of whether or not the code, as written, does what it intends.
> I can set and hit a breakpoint faster than I can type the print statements to perform printf debugging, let alone recompile and hit the print statements.
In this case, the implication is that recompiling and re-running is a time-consuming task. Certainly with C/C++ this is often true, particularly if the code is poorly factored and compile-time dependencies are large. With other languages and tools, compile time is either zero (Smalltalk) or negligible (Eclipse).
In every scenario I've seen, printf debugging was less efficient than firing up a debugger.
Imagine a server-type program (a MUD gamedriver, in fact) which crashes once a month after several hundred users have used it. It always crashes with memory corruption at the same place, after having executed the same piece of code millions of times without a problem. Obviously the actual corruption happens somewhere else in the code, at some unspecified time before the crash.
I fail to see how a debugger or unit tests would be helpful in this kind of situation.
In the end the problem turned out to be an off-by-one error in a loop, triggered only by certain rare data constellations in memory, and one which didn't change the direct logical outcome of the loop itself. I found it using a combination of printf statements and assertions, narrowing the suspect portion of the code down to a couple dozen lines over several weeks, at which point intuition came to my aid.
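A hedged sketch of the kind of off-by-one described above, together with the sort of assertion that helps narrow a suspect region down (the buffer size and function names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Buggy version of the loop: "<=" runs one iteration too many,
 * writing buf[len], i.e. one element past the end of the array:
 *
 *     for (size_t i = 0; i <= len; i++) buf[i] = 0;
 *
 * Fixed version below, with an assertion documenting the bounds
 * invariant. Sprinkling assertions like this through suspect code
 * is one way to narrow a corruption site down over time. */
void clear_buf(int *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        assert(i < len);   /* index stays in bounds */
        buf[i] = 0;
    }
}
```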
Bottom line: there is no single right tool - you have to use what is best for a given situation.
The important things have already been said: debuggers can certainly be valuable when working with other people's code (I find myself single stepping through MFC from time to time) and when working with low-level languages where the bug may not be visible at the source level.
I'd like to add another context: developing device drivers or working with new hardware. In these cases debuggers are pretty much needed from day one. How are you going to deal with buggy hardware? How are you going to deal with a device driver that spontaneously reboots your system when it hits an unforeseen event? Bob, would you say that kernel debuggers and SoftICE are wasteful timesinks?
Robert, I would be more inclined to agree with you if you had developed your arguments at least a bit further.
Here's what I have issue with: "debuggers have done more to damage software development than help it"
For me this statement is false. While I don't claim that any debugger has provided any *unique* functionality that has helped me, setting a breakpoint in order to examine code/program state at a particular point is faster, by far, than writing the appropriate output statements. I would imagine this to be true, at least some of the time, even if I coded with finely granular unit tests.
"The kinds of bugs I have to troubleshoot are easily isolated by my unit tests..."
IMHO, you should describe what kind of bugs these are and provide examples of them, and of the other cases (where unit tests don't isolate them).
"... and can be quickly found through inspection and a few judiciously placed print statements. "
This also leaves the reader an easy 'out' from the central point - what if they have bugs that aren't found by quick inspection and print statements?
Ultimately, I would partially agree with you in the sense that, of all the changes/bugs in my code, I only use a debugger in about 5-50% of cases. However, categorically labelling debuggers a timesink does not provide any useful guidance for the developer, IMO.
> Imagine a server-type program (a Mud gamedriver in fact) which crashes once a month after several hundred users used it. ... Obviously the actual corruption happens someplace elsewhere in the code, at some unspecified time before the crash. I fail to see how a debugger or unit tests would be helpful in this kind of situation.
Just as a data point, a debugger helped me find a problem just like that on a high-volume web site (members.aol.com), where we'd get a web server crash every day or two across a farm of several machines. These things were handling hundreds of millions of requests a day. I arranged with the support crew to leave a debugger camped on one of the machines, and got notified when the server stopped responding. About 10 minutes later, by looking at some of the server data structures, I had found a really obscure HTTP header parsing problem, which would have been the last place I'd have looked with inspection, since that code had "been around and working for years".
On the other hand, the debugger was totally worthless with another crash a different team was having. We found that one by inspection; it turned out to be in that team's custom module (ironically enough, in a debugging print statement that clobbered the stack if it got fed too much data).
"In fact, if we could somehow wipe out all debuggers everywhere, a whole lot of bad code would shrivel up and die and we'd all be better off."
Assuming you weren't being facetious for a second, how do you (or does Linus) draw a direct correlation between debuggers and bad code? Debuggers do not write, modify or even suggest code. They show you internal program states that (in theory) allow you to diagnose bad code you or someone else wrote. They are not the only alternative for doing this, of course, but they do not influence your code beyond giving you more information about data or program states.
I think the implication is that because debuggers exist, you'll write sloppy code with the plan of gleefully dredging through it with the debugger. Or more charitably, that you just won't be as careful, because you know you can always debug it later.
I don't use debuggers as much as I used to, but I think that might illustrate another thing about them: they are good learning tools, especially in lower-level languages like C or C++. In these languages, a beginner can have a hard time even getting printf() to work reliably (I remember discovering just how important the difference between "%d" and "%ld" was!), and as mentioned above, printf can alternately cause and mask bugs in the code.
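To spell out the "%d" vs "%ld" trap mentioned above: the format specifier must match the argument type, because int and long may differ in size, and passing a long to "%d" is undefined behavior. A small sketch using snprintf so the result can be inspected (the function name is invented for illustration):

```c
#include <stdio.h>
#include <string.h>

/* "%d" expects an int; "%ld" expects a long. On platforms where
 * long is wider than int, using "%d" for a long reads the wrong
 * number of bytes off the argument list: a classic beginner trap.
 * snprintf is used here so the formatted result is observable. */
int format_long(char *out, size_t n, long value)
{
    return snprintf(out, n, "%ld", value);  /* correct: %ld for long */
}
```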
I do like using traces and logging, with runtime (not compile-time) enabling of both. Many C/C++ programmers are probably shuddering at the thought of all those lost microseconds of CPU time during a few hours' execution, but I think the flexibility justifies it.
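A minimal sketch of what runtime-enabled tracing can look like in C, assuming a global flag flipped by a command-line option or signal handler (all names here are invented for illustration). When tracing is off, each call costs a single branch, which is why leaving the calls compiled in is usually affordable:

```c
#include <stdio.h>

/* Global trace flag; in a real program this would be set from a
 * command-line option, config file, or signal handler. */
static int g_trace_enabled = 0;

/* Tracing stays compiled in, but the fprintf only runs when the
 * flag is set, so the disabled cost is one branch per call site. */
#define TRACE(...)                              \
    do {                                        \
        if (g_trace_enabled)                    \
            fprintf(stderr, __VA_ARGS__);       \
    } while (0)

void set_trace(int on) { g_trace_enabled = on; }
```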
Some responses here sound as though this piece about debuggers were some great revelation.
Debuggers, like any other tool, have their context for effective use. In my own design and programming practice, though, I make little use of debuggers, for the same reasons Robert Martin gives here.
Mainstream debuggers are a last-resort tool for answering the questions "why is my program not running?" or "why does my program behave this way?" Unfortunately, answers to those questions from the debugger's perspective have a very limited scope of help: these kinds of debuggers are unable to debug the design and structure of the software.
Perhaps treating debuggers as the primary source for those kinds of answers is what makes them time-wasting.
There's another Linus Torvalds quote in the same vein:
We didn't have to replicate the problem. We understood it. (Linus Torvalds)
Of course, he's being facetious. When isn't he? But I assume that what he's trying to say is that if you have to use a debugger, you don't understand the problem.
That brings us back to Bob Martin's original point, which is that fixing problems without understanding the root cause can result in sloppy patches. If you use debuggers as a knee-jerk reaction to anything that doesn't behave the way you expected, you may end up writing code that has a poor architecture or is simply a workaround.
There's another quote that's relevant:
If you cannot grok the overall structure of a program while taking a shower, you are not ready to code it. (Richard Pattis)
For those of you interested in scientific applications:
Improving the performance and quality of code is not usually a trivial task. Robust, easy-to-learn tools like TotalView can frequently make code development and analysis much more productive. Although print statements are a highly portable and possibly effective way to develop code, I have found the time well spent in familiarizing myself with the features of capable debuggers.