This post originated from an RSS feed registered with .NET Buzz
by Darrell Norton.
Original Post: Where does unit testing fail?
Feed Title: Darrell Norton's Blog
Feed URL: /blogs/darrell.norton/Rss.aspx
Feed Description: Agile Software Development: Scrum, XP, et al with .NET
(Mike Gunderloy) Q: Now that we've got some reasonable experience with unit testing, where do you think it fails?
(Mike Clark) A: I think unit testing fails when we use it as a substitute for acceptance testing. That is, unit tests that pass don't tell us that we built the right thing. They only tell us that what we built works as we, the programmers, expect. So as much as I'm encouraged by the enthusiasm for test-driven development and unit testing, I think we have to start putting an equal amount of emphasis on acceptance testing.
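To make that distinction concrete, here is a minimal NUnit sketch in C# (the post names no framework, and the OrderPricer class and its numbers are invented for illustration). The first test pins down a programmer-level detail; the second is phrased in the customer's vocabulary, the kind of check that belongs in an acceptance suite, which on a real project would drive the whole system from the outside rather than a single class.

```csharp
using NUnit.Framework;

// Hypothetical pricing code, invented purely to illustrate the distinction.
public class OrderPricer
{
    public static decimal ApplyDiscount(decimal subtotal, decimal rate)
    {
        return decimal.Round(subtotal * (1m - rate), 2);
    }
}

[TestFixture]
public class PricingTests
{
    // Unit test: confirms the code works as the programmer expects
    // (discount arithmetic and rounding). It cannot tell us whether
    // a 10% discount was what the customer actually asked for.
    [Test]
    public void ApplyDiscountRoundsToCents()
    {
        Assert.AreEqual(89.99m, OrderPricer.ApplyDiscount(99.99m, 0.10m));
    }

    // Acceptance-style check: phrased as the requirement the customer
    // signed off on ("repeat customers get 10% off a $200 order").
    // If the requirement was misunderstood, this is the test that fails.
    [Test]
    public void RepeatCustomersGetTenPercentOff()
    {
        Assert.AreEqual(180.00m, OrderPricer.ApplyDiscount(200.00m, 0.10m));
    }
}
```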
I also see unit testing failing in cases where it's a checkbox item on a project schedule. Just saying that we're doing unit testing says nothing about the quality of the tests. So I like to see a test fail once in a while, because that tells me the test is actually exercising something that could break. I also like to open the tests up to review by users of my code and by folks with more testing experience, to see where the tests could be improved.
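As a rough illustration of the difference between a checkbox test and one that earns its keep, the sketch below (again NUnit, with an invented BoundedStack class) puts a test that can never fail next to one that would catch a realistic defect.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical stack wrapper, invented only to contrast weak and meaningful tests.
public class BoundedStack
{
    private readonly Stack<int> items = new Stack<int>();
    public void Push(int value) { items.Push(value); }
    public int Pop() { return items.Pop(); }
    public int Count { get { return items.Count; } }
}

[TestFixture]
public class BoundedStackTests
{
    // Checkbox-quality test: it runs the code but asserts almost nothing,
    // so no plausible bug in Push or Pop could ever make it fail.
    [Test]
    public void WeakTestThatCannotFail()
    {
        BoundedStack stack = new BoundedStack();
        stack.Push(42);
        Assert.IsNotNull(stack); // always true; proves nothing about behavior
    }

    // A test worth keeping: it pins down behavior that a realistic defect
    // (wrong ordering, lost element) would actually break.
    [Test]
    public void PopReturnsItemsInLastInFirstOutOrder()
    {
        BoundedStack stack = new BoundedStack();
        stack.Push(1);
        stack.Push(2);
        Assert.AreEqual(2, stack.Pop());
        Assert.AreEqual(1, stack.Pop());
        Assert.AreEqual(0, stack.Count);
    }
}
```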
Finally, a common question I hear revolves around what to do about legacy code on a project that doesn't have unit tests. It frightens me to hear that some managers want to reach a certain percentage of test coverage across all legacy code. Unit testing legacy code can be successful if done tactically, around sections of code that are undergoing change or that contain bugs. But when unit testing is treated as an all-or-nothing effort, it's neither practical nor economical.
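Here is one sketch of that tactical approach, assuming NUnit and a made-up LegacyReportParser: add a characterization test that pins down the existing behavior you rely on, add a focused test around the reported bug, and leave the rest of the legacy code alone until it next has to change.

```csharp
using NUnit.Framework;

// Simplified stand-in for a legacy routine (after the targeted fix); the
// reported bug was that a final record without a trailing newline was miscounted.
public class LegacyReportParser
{
    public static int CountRecords(string raw)
    {
        if (raw == null || raw.Length == 0)
        {
            return 0;
        }
        return raw.TrimEnd('\n').Split('\n').Length;
    }
}

[TestFixture]
public class LegacyReportParserTests
{
    // Characterization test: pins down the existing behavior other code
    // depends on before anything gets changed.
    [Test]
    public void CountsNewlineTerminatedRecords()
    {
        Assert.AreEqual(2, LegacyReportParser.CountRecords("a\nb\n"));
    }

    // Targets the reported defect directly: this is the test that fails
    // against the buggy version and passes once the fix is in.
    [Test]
    public void CountsTrailingRecordWithoutNewline()
    {
        Assert.AreEqual(3, LegacyReportParser.CountRecords("a\nb\nc"));
    }
}
```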