Recently a client asked for a rundown of TDD and unit testing. This client was calling anything that tested any code a "unit test", leading to a bit of confusion.
To be fair, the history of the term unit test does imply that definition. Recall the waterfall model: code is designed, then implemented, then "unit tested". In that model, pretty much any developed code is a "unit", so unit testing means testing any code at all.
TDD shifts this definition subtly but importantly. The definition I proposed was the tightest one: a unit is the smallest independently testable piece of code (usually a class exercised through its public methods; this was a Java project), and a test qualifies as a unit test only if it targets exactly one such unit. I then proceeded to categorize other kinds of tests (integration, acceptance, performance, etc.) to emphasize that 21st-century development efforts require many layers of tests.
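To make the tight definition concrete, here is a minimal sketch in Java. The `Adder` class and its test are hypothetical names invented for illustration; a real project would use a framework such as JUnit, but plain assertions are used here to keep the example self-contained. The point is the scope: the test touches exactly one class, and only through its public methods.

```java
// A hypothetical class under test: the smallest independently testable unit.
class Adder {
    public int add(int a, int b) {
        return a + b;
    }
}

// A unit test in the tight sense: it exercises only Adder, via its public API.
// No database, no network, no collaborating classes -- those belong to
// integration tests, a different layer entirely.
public class AdderTest {
    public static void main(String[] args) {
        Adder adder = new Adder();
        if (adder.add(2, 3) != 5) {
            throw new AssertionError("add(2, 3) should be 5");
        }
        if (adder.add(-1, 1) != 0) {
            throw new AssertionError("add(-1, 1) should be 0");
        }
        System.out.println("all tests passed");
    }
}
```

By this definition, a test that spun up two collaborating classes or hit a real database would no longer be a unit test, however useful it might be as an integration test.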
Certainly the original definition of unit test is not incorrect. However, more modern views of development get better traction out of a more specific definition, one version of which - and a very concise one at that - is Michael's list of criteria.