Summary
Some developers are easily test-infected - they take to unit testing like a duck to water. Others need some time and encouragement, but eventually "get it". A third group appears to have immunity to test infection. I invent a test-gene model to categorize these groups and look at its implications for the future of developer/unit testing.
Developer testing is the simple, but logical, idea that developers should take responsibility for playing an active role in testing their own code. We are nearing the 10th anniversary of JUnit - the unit testing framework that kick-started the developer testing revolution. But, despite its obvious benefits and broad agreement that "it's the right thing to do", developer testing is still only practiced by a minority of developers in a minority of organizations.
I gave talks on JUnit and the developer testing revolution at two
JavaOne conferences (2004 and 2006), in which I outlined the three
possible outcomes for the future of developer/unit testing:
Long-Term Defeat: The idea of developer testing was a noble, notable,
but ultimately short-lived blip in the history of software development.
Almost all developers went back to the old way of thinking: testing
is for QA.
Minor Victory: Developer testing does not disappear, but it remains a
niche activity. Practiced only by a devout minority of test-infected
developers.
Major Victory: Developer testing becomes a standard software development
practice. The rule, instead of the exception. Releasing software without
unit/developer testing is as unthinkable as releasing software without
QA.
I am rooting, of course, for the major victory, but I am starting to
have my doubts, and wondering why progress is so slow. Below is my
analysis of what I believe is happening and I'd be very interested in
your opinion.
In the past few years, I've noticed that some developers are easily
"test-infected", others require longer exposure and some encouragement
but eventually "get it", and one group seems to have a built-in
immunity.
It's as if there are 3 basic types of test genes:
T1 - Very susceptible to test infection. Show them a single JUnit
example (like the short sketch after these three descriptions) and they
get it immediately - and start using it regularly and with great fervor.
When time pressure hits, they fight hard for enough time to test and
would rather quit than produce code without tests. They often have
framed pictures of Kent Beck on their desk :-).
T2 - Somewhat susceptible to test infection. Show them how to use
JUnit/TDD, give them some encouragement and time to appreciate the
benefits and rewards; eventually they will become regular users. When
time pressure hits, however, they often revert to no-test mode without
too much kicking and screaming.
T3 - Immune to test infection. There's no amount of showing and
convincing that will get them to practice developer testing on their own
with any regularity - if at all. Many of them would rather quit than
have to write tests on a regular basis.
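For readers who have never seen one, here is a minimal sketch of the kind of single JUnit example that tends to hook a T1. The Calculator class and its add() method are made up purely for illustration, and the example assumes JUnit 4 is on the classpath:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // Hypothetical class under test, included inline so the example is self-contained.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    public void addReturnsTheSumOfItsArguments() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));   // expected value first, actual value second
        assertEquals(0, calc.add(-2, 2));  // a second case costs almost nothing to add
    }
}

Run it from the IDE or the build, watch the green bar, and the T1 is hooked.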
I would love to be able to say that these genes are somewhat evenly
distributed. A rough Gaussian distribution. Something like this:
T1: 30%
T2: 40%
T3: 30%
But my experience is that developers with the T3 gene outnumber
those with the T1 and T2 genes combined.
This spells trouble because I have also found that when developers
become managers they bring with them their attitudes toward developer
testing. And a T3 manager is not going to provide an environment where
the T2 (the ones that require some time and encouragement) can become
test infected. And this might explain why developer testing is still
practiced by a minority of developers in a minority of organizations.
What do you think? Am I too pessimistic? Is predisposition to testing more a matter of nature or nurture?
I'd put the Major Victory as "Developer testing becomes a commonly accepted part of Quality Assurance. The rule, instead of the exception. Releasing software without unit/developer testing is as unthinkable as releasing software without doing any Quality Control."
I'm not pessimistic about there being more T3 than T1 or T2 developers. Just so long as the T1 and T2 developers end up generally more successful, I'm sure the change is inevitable.
Alberto, I agree that T3 outnumbers T1 + T2. At least that's the feeling I have after speaking to a number of developers.
One of the things I would like to discuss with the community is "how feasible TDD really is?" Writing tests for every trivial piece of code seems tedious and boring and I have a strong feeling that this may be something that prevents a lot of developers from continuing writing unit tests. It would be interesting to hear what others think about this...
I would also like to hear from developers who practice TDD and if they find the practice really rewarding...
> One of the things I would like to discuss with the community is "how feasible TDD really is?"
Hi Nitin,
I should clarify that in this case I am talking about ANY type of developer/unit testing, not just TDD. I am counting developers who write and use unit tests with some consistency; whether in test-driven fashion, during, or even after development. The numbers would be considerably smaller for the TDD gene.
I think that regular test infection (i.e. not the TDD variety) is an easier path to achieve broader overall adoption of developer testing. Most un-infected and un-exposed developers I talk to understand unit testing and why they should do it. Like diet and exercise, knowing what's good for you does not mean that you'll actually do it. But at least they understand and agree that they should be doing it.
On the other hand, when I explain and demonstrate TDD I get mostly confused or quizzical looks and comments like: "Say that again: you want me to write the tests before I write the code? That's crazy talk!"
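To make the "crazy talk" concrete, here is a rough sketch of the TDD sequence, using a hypothetical Stack class (the names are invented; the point is only that the test exists, and fails, before the production code does):

// Step 1 (red): write a test for behavior that does not exist yet.
// At this point it does not even compile, which in TDD counts as a failing test.
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class StackTest {
    @Test
    public void newStackIsEmpty() {
        Stack stack = new Stack();
        assertTrue(stack.isEmpty());
    }
}

// Step 2 (green): write just enough production code to make the test pass.
class Stack {
    private int size = 0;
    boolean isEmpty() { return size == 0; }
}

// Step 3: refactor if needed, then repeat with the next small test (push, pop, ...).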
I think the answer is simply that the majority of programmers learn their trade "on the job" or, if they're lucky, get sent on a one-week basic introductory course in whatever language they are going to use. After that they never seem to seek, nor get offered, any further training.
They are 'journeymen'. They do what they do for a living and give it no more thought once they leave the office. In a few years time they'll be promoted to managers or transfer to personnel or whatever. To them programming is just another office task.
The ones who have a genuine interest in programming as a skill, or a hobby, are a surprisingly small minority. They may make up almost the entire on-line population, or be everyone you meet at a conference, but don't let that fool you; they're still a minority. Even the concept of limiting the size of a method is alien to many of these people. In my company, it's still not particularly unusual to see five-hundred-line methods. In the absence of peer review, who's going to stop it happening?
Alternatively, look at any coding problems forums. Questions are either trivially simple - and get piles of duplicated answers - or genuinely taxing - and get largely ignored. (The third category is homework questions where the recipient has no interest in either doing the work or looking at your answer but just wants something that they can show their tutor.)
Set against this, I'd say a 30% take-up rate for unit testing is amazingly high not low.
> Major Victory: Developer testing becomes a standard software development practice. The rule, instead of the exception. Releasing software without unit/developer testing is as unthinkable as releasing software without QA.
Hmmmm...that would imply that in the majority of environments releasing code without QA is unthinkable...
I think you're so far ahead of the pack that they can no longer see you.
Anyway, I went through three phases with TDD:
1. Huh?
2. Jeeze, that looks like a lot of work...
3. Oh, it's what I've always done + lots of dogma
> On the other hand, when I explain and demonstrate TDD I get mostly confused or quizzical looks and comments:
And yet ... I would wager that, of the people who are doing unit testing regularly, the percentage who do TDD is quite high. Part of the paradox is that TDD sounds crazy but the alternative - the "sensible way" (test last) - is hard and tedious and boring and people stop doing it after a while.
As long as you have an 'optional' part of development that is hard and tedious and boring people will not do it. TDD addresses this problem by making testing a core part of the development cycle...and it actually becomes fun. More fun than debugging anyway.
Kevin www.junitfactory.com Send us Java. Get JUnit back. For free.
> They are 'journeymen'. They do what they do for a living and give it no more thought once they leave the office.
Hi Vincent. Thank you for adding that perspective. Living in Silicon Valley (land of the geeks), it's very easy to forget that the majority of people doing software development were not born geeks drawn to it by passion, but because it's a pretty darn good way to make a living.
However, my guesstimates for the T1/T2/T3 ratios would not change much if I counted only the true alpha-geeks with a passion and a true talent for programming. As a matter of fact, these are the people I deal, and have dealt, with most of the time (having been fortunate enough to have worked at places like Sun, Google, and Agitar). I started thinking about the test-gene theory not by looking at a uniform slice of the developer community, but at the developers in my "backyard" - most of whom don't qualify as 'journeymen'.
As a matter of fact, when we started Agitar our VCs had us do due diligence with some "big name" software developers; people who have made major contributions to programming languages, operating systems, etc. I still remember when one of the biggest contributors to Java (no, not James Gosling nor Bill Joy) told us straight to our face something along the lines of: "I am a developer. I don't write tests." That was a short and very uncomfortable meeting.
Bottom line. I agree 100% with your important distinction and distribution of "true" developers vs. "journeymen", but I don't think it changes the distribution of T1, T2, and T3s much.
My organisation found that the type of 'encouragement' that worked was a combination of carrot and stick.
Sticks:
(a) If the peer review does not see unit testing code, then it is rejected and recorded as such.
(b) If QA does not see unit testing results when the unit is handed over to them, it is rejected and recorded as such.
(c) Persistent failures in these two areas mean that coders are given less responsible (i.e. interesting) tasks until they prove their worth to the project/company again.
Carrots:
(a) A high pass rate through the review and QA process is rewarded, both for the individual coder and for the team they are in.
(b) Senior managers and line managers are also rewarded for having successful coders in their teams.
(c) Our customers are also pleased with the quality improvement, such that some of them award 'bonuses' to the team working on their issues.
The result was that within a few months, unit testing was seen as normal and desirable by all parties concerned. Even though more time is devoted to peer reviews and coding unit tests (with schedules planned accordingly), and there are more administration overheads, the pleasant feeling of producing quality and being recognized as a company asset outweighs these.
Note that unit testing does not automatically improve quality, but it certainly improves the probability of achieving high quality.
Writing unit tests before writing implementation code can detect errors in the specification and/or requirements. Writing unit tests after the implementation tends to test what has been implemented rather than test the specification, and often supports the impression that bad specifications are actually okay - specifications that then become costly to fix after the code has left the coder.
> One of the things I would like to discuss with the community is "how feasible TDD really is?" Writing tests for every trivial piece of code seems tedious and boring and I have a strong feeling that this may be something that prevents a lot of developers from continuing writing unit tests. It would be interesting to hear what others think about this...
My personal feeling is that unit tests are a really good idea for a small portion of the code I write. And actually, it's the trivial pieces that they work best for. The larger portion of the code I work on doesn't rely only on the correctness of the component pieces (or units) but on their interaction together.
I think a lot of people are writing 'unit tests' to test these higher-level interactions, but I think there's a difference, and the way to test them is different too. What I call this is regression testing, and personally I think that the testing community has put much too little effort into making it easy and automated. In the work I've done lately, you basically have a process that takes some sort of input and, based on a lot of environment-related resources and factors, produces some sort of document. Unit testing frameworks are really not very helpful with this. What you need to test this is a repository of correct outputs related to canned inputs. If you find a bug in production that your tests didn't catch, you add tests. If you change the code, you need to test with filters (ignoring expected changes) and then eventually update the output files. Once I got this going, unit tests seemed really rudimentary.
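For what it's worth, here is a bare-bones sketch of that "repository of correct outputs" idea expressed as a JUnit test. The testdata file names, the runProcess() stand-in, and the filter() helper are invented for illustration; a real version would call the actual document-producing process and filter out whatever is expected to change between runs before comparing:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class GoldenOutputRegressionTest {

    // Stand-in for the real process that turns canned input into a document.
    private String runProcess(String input) {
        return input.toUpperCase();
    }

    // Stand-in filter that masks parts expected to change between runs (dates, ids, ...).
    private String filter(String text) {
        return text.replaceAll("\\d{4}-\\d{2}-\\d{2}", "<DATE>");
    }

    @Test
    public void outputMatchesStoredGoldenFile() throws Exception {
        String input = new String(
                Files.readAllBytes(Paths.get("testdata/case1.input.txt")), StandardCharsets.UTF_8);
        String expected = new String(
                Files.readAllBytes(Paths.get("testdata/case1.expected.txt")), StandardCharsets.UTF_8);

        // When the code changes intentionally, the expected file is reviewed and updated;
        // when a production bug slips through, a new input/expected pair is added.
        assertEquals(filter(expected), filter(runProcess(input)));
    }
}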
> Living in Silicon Valley (land of the geeks), it's very easy to forget that the majority of people doing software development were not born geeks drawn to it by passion, but because it's a pretty darn good way to make a living.
Wow, two unsupported claims in one sentence: 1) A higher percentage of Silicon Valley geeks are doing it out of passion. 2) The majority of people doing software development aren't doing it out of passion.
Is this just sloppy thinking or is there an ad hominem argument implied here: that those without the "testing gene" are less geeky than those who have it?
I think one way to encourage T2 to become T1 would be through better tool support in IDEs. As of now IDEs are mostly concentrating on making coding easier and very little on generating tests.
> I think one way to encourage T2 to become T1 would be through better tool support in IDEs. As of now IDEs are mostly concentrating on making coding easier and very little on generating tests.
What do you mean by this? Do you think that IDEs should generate tests from the code that you've written? That would seem to mean that the tests come after the code. Also, being generated from the code, I would imagine that those tests would fail to find any bugs, since any such bugs would form part of the definition of the tests.
Given that the tests should be produced before the code, what form of assistance would you expect from the IDE?
I typically spend 1.5 to 2 hours in testing for every hour of code development. Using my methods I've developed complex financial applications that have run in production without any bugs at all!
Yet, I do not follow TDD. TDD is simply a buzzword driven fad that will soon be confined to the dustbin of history, along with many other fads (such as Java and C#).
Can TDD always lead to the most appropriate algorithm for doing things? Does not TDD focus at too low a level?
A contrarian view to stir up the pot, and hopefully force the religious adherents of TDD to think about their dogmatic views.
>> what form of assistance would you expect from the IDE?
I would imagine an IDE could generate tests using some sort of graphical representation of the requirements and the signature of the method under test. The tests definitely should not be generated from the code, which will not be there yet in any case, as per TDD.