The Artima Developer Community

Weblogs Forum
Religion's Newfound Restraint on Progress

55 replies on 4 pages. Most recent reply: Apr 11, 2008 8:15 AM by Matt Drayer

Kay Schluehr

Posts: 302
Nickname: schluehk
Registered: Jan, 2005

Re: Religion's Newfound Restraint on Progress Posted: Oct 10, 2007 10:49 PM
> Just as it's absurd to say that TDD fails by pointing to
> projects that don't follow it, you can't conclude that a
> post-coding testing process fails by pointing to projects
> that don't actually do it.

No one claimed that. Can you point to something someone actually said instead of hallucinating an argument that isn't present? No one talked here about project failure either.

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: Religion's Newfound Restraint on Progress Posted: Oct 11, 2007 10:28 AM
> > Just as it's absurd to say that TDD fails by pointing to
> > projects that don't follow it, you can't conclude that a
> > post-coding testing process fails by pointing to projects
> > that don't actually do it.
>
> No one claimed that. Can you point to something someone
> actually said instead of hallucinating an argument that
> isn't present? No one talked here about project failure
> either.

I think you misunderstood me. I didn't say anything about project failure, but about testing methodologies failing to produce good testbases. You said:

> That the latter won't happen is not just an arbitrary
> theoretical assumption. You can go do field research
> and just count the number of projects without having a
> proper testbase. For obvious reasons none of them uses
> TDD.

I assumed that your point was that projects that used other testing methods failed to produce good testbases. But if a project was really doing post-coding unit testing correctly, there's no reason to suspect the results would be worse than using TDD.

And again, while the projects you refer to may not have used TDD, there's no reason to believe that a poorly performed TDD process can't produce poor results just as with any other process.

Some argue that if you don't do TDD properly, it really isn't TDD, but you could say that about any method. You can't eliminate human error by "definition".
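Whichever way the discipline is framed, the artifact both camps are arguing about is the same: an ordinary unit test. A minimal sketch in Python (the `median` function and its tests are hypothetical examples, not taken from the discussion) shows that nothing in the finished artifact reveals whether the test was written before the code (TDD) or after it:

```python
import unittest

# Hypothetical unit under test. Whether this function or its tests were
# written first is invisible in the resulting code base.
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class MedianTest(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)
```

Run with `python -m unittest <module>`; what differs between test-first and test-after is the process that produced this file, not the file itself.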

Kay Schluehr

Posts: 302
Nickname: schluehk
Registered: Jan, 2005

Re: Religion's Newfound Restraint on Progress Posted: Oct 11, 2007 12:11 PM
> > > Just as it's absurd to say that TDD fails by pointing to
> > > projects that don't follow it, you can't conclude that a
> > > post-coding testing process fails by pointing to projects
> > > that don't actually do it.
> >
> > No one claimed that. Can you point to something someone
> > actually said instead of hallucinating an argument that
> > isn't present? No one talked here about project failure
> > either.
>
> I think you misunderstood me. I didn't say anything about
> project failure, but about testing methodologies failing
> to produce good testbases.

Yes, thank you for the clarification.

> You said:
>
> > That the latter won't happen is not just an arbitrary
> > theoretical assumption. You can go do field research
> > and just count the number of projects without having a
> > proper testbase. For obvious reasons none of them uses
> > TDD.
>
> I assumed that your point was that projects that used
> other testing methods failed to produce good testbases.
> But if a project was really doing post-coding unit testing
> correctly, there's no reason to suspect the results would
> be worse than using TDD.

Yes, but hardly any project team does, and I've never seen a project plan that budgeted effort for writing UTs at the end of a schedule. And as project size increases, it becomes ever more unlikely that whitebox tests will be retrofitted into the code base.

Projects do manage to do post-coding functional tests, which are performed by testers who are not coders, for good reason: they are supposed to test against requirement specifications and not against code. Reading the code to interpret the spec and writing tests accordingly is to be prevented. These tests have an impact on overall product quality but not on code quality (unless you deliver the code itself, which seems to be the favoured product for some happy few). So it makes sense to evolve tests at least in parallel with the code base (for pragmatic reasons I'm not an adherent of TDD's strictness - see my comment above - but TDD is the most consistent approach in this respect).

> And again, while the projects you refer to may not have
> used TDD, there's no reason to believe that a poorly
> performed TDD process can't produce poor results just as
> with any other process.

Poor coding quality cannot be eliminated by any process whatsoever. It can only be tamed by providing the ability to transform the mess without uncontrolled breakage.

> Some argue that if you don't do TDD properly, it really
> isn't TDD, but you could say that about any method. You
> can't eliminate human error by "definition".

That's just the nature of making definitions. You always run into a koan-like paradox: without a definition you don't have a concept, and everything evaporates into guesses and vagaries; but once you make a definition you are in danger of falling into literal belief, where anything not covered by your definition no longer makes sense.

Michael Feathers

Posts: 448
Nickname: mfeathers
Registered: Jul, 2003

Re: Religion's Newfound Restraint on Progress Posted: Oct 11, 2007 12:35 PM
I agree with Cope that we have to be cautious; however, I think it's easy to overestimate how much we can settle with studies.

If we look back at all of the movements that could have been regarded as religious in software, object-orientation would have to be on the list. How certain are we now that objects are really "the answer"? And if we don't know, how much did our uncertainty as an industry keep us from moving in that direction? Not much, it seems. I think we've had a net benefit, but we also have to realize that we acted on incomplete knowledge, something akin to faith.

I was at JAOO a few weeks ago also, and I was very glad to meet Trygve Reenskaug for the first time. He had a wonderful session on model-level patterns, but the things that I'll remember the most from his session are the off-hand comments he made about the level of understandability in object-oriented code -- how do we really understand what is going on? Will everyone on a team understand what is going on?

It was great to hear that from one of the pioneers of OO. Some people (perhaps the majority of developers) are just much more comfortable with procedural code. Others drink up OO as if it were water. Maybe there is something we can learn from that.

I think that far more is contextual in software development than we'd wish.

I also think that all human systems suffer from something I call "social signal loss": whenever we write or talk about something, the signal that carries is simpler than what was transmitted. Some people will hear what Cope is saying now and say "well, he's saying that all Agile is baloney" when, in fact, he didn't say that. Some people, as well, will hear that "TDD helps you arrive at better designs" and not realize that TDD, like any other practice, requires people. Their experience, attitudes, and skill can affect the outcome as much as any other variable introduced into the mix. It's just the way that it goes. People are central in our endeavor.

I realize this is running on (I should've turned it into a blog), but I just wanted to mention also that one of my favorite books is The Deadline by Tom DeMarco. In it, his protagonist gets to do something that we never get to do: run large-scale controlled experiments in software development. I don't think that will ever be within our reach, but that doesn't mean we should stop trying to gather as much empirical evidence as we can. And I do think that we have to make peace with the idea that there may not be any universal answers: no practice is universally good for every context, and no practice is universally bad. I think we'd all admit that, but in conversation we don't act like it. Myself included. Human nature strikes again.

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: Religion's Newfound Restraint on Progress Posted: Oct 11, 2007 1:07 PM
> > I assumed that your point was that projects that used
> > other testing methods failed to produce good testbases.
> > But if a project was really doing post-coding unit testing
> > correctly, there's no reason to suspect the results would
> > be worse than using TDD.
>
> Yes, but hardly any project team does and I've never seen
> a project plan that contained efforts for writing UTs at
> the end of a schedule. And as the project size increases
> it becomes ever more unlikely that whitebox tests will be
> retrofitted into the code base.
>

I didn't say anything about writing UTs at the end of a schedule, but rather after a unit is written. In any case, your experience (and mine) is insignificant compared to the number of projects out there. You can't assume nobody does it just because you aren't aware of it.

Nevertheless, the core issue is still that you can't compare TDD with testing after coding on the basis that "nobody does" the latter.

If the programmers in an organization aren't disciplined enough to create unit tests after writing a unit, and management isn't committed enough to require them to do it, it's not likely that the organization is going to truly adopt TDD either. Discipline and commitment come from people, not from the methodology used.

Doug Pardee

Posts: 1
Nickname: dougpardee
Registered: Dec, 2004

Re: Religion's Newfound Restraint on Progress Posted: Oct 12, 2007 9:57 AM
I'm not sure that one can really blame academia.

How many people have actually learned (practical) software development in college? Most of the people that I've run into in my career have been self-taught. Oh, they might have learned something back in college—I learned Fortran II and assembly language for an old CDC computer—but the real learning seems to come from books, from experimentation with free/gratis software, and (yes) from experience on the job.

The O'Reilly book catalog is probably the most powerful player in the shaping of software developers today. The influence of academia is almost trivial in comparison.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

1994 Simple Smalltalk testing Posted: Oct 12, 2007 8:49 PM
James O. Coplien wrote
-snip-
> Perhaps TDD is one way of
> compensating for the lack of static type checking (which
> Perry's research has established as a powerful way of
> reducing errors) in languages like Smalltalk.
-snip-

Will any ole static type checking do or do we need a powerful type system? :-)

First came a Smalltalk testing framework; TDD didn't show up until nearly a decade later.

"Simple Smalltalk testing" Kent Beck
Smalltalk Report 4(2) Oct 1994
http://www.macqueen.us/apache2-default/www.macqueen.us/smalltalkReport/ST/91_95/SMAL0402.PDF
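The core of that 1994 framework is small enough to sketch. The following Python skeleton (hypothetical class and method names, not Beck's Smalltalk code) mimics the xUnit pattern it introduced: test methods are discovered by name, each runs against a fresh fixture built in `set_up`, and failures are collected instead of aborting the run:

```python
# Minimal xUnit-style skeleton: one TestCase instance per test method,
# so every test starts from a fresh fixture.
class TestCase:
    def __init__(self, name):
        self.name = name

    def set_up(self):
        pass  # subclasses build their fixture here

    def run(self, results):
        self.set_up()
        try:
            getattr(self, self.name)()          # invoke the test method by name
            results.append((self.name, "pass"))
        except AssertionError as exc:
            results.append((self.name, f"fail: {exc}"))

class StackTest(TestCase):
    def set_up(self):
        self.stack = []

    def test_new_stack_is_empty(self):
        assert len(self.stack) == 0

    def test_push_then_pop(self):
        self.stack.append(42)
        assert self.stack.pop() == 42

def run_all(case_class):
    """Discover and run every test_* method, collecting results."""
    results = []
    for name in dir(case_class):
        if name.startswith("test_"):
            case_class(name).run(results)
    return results
```

Real xUnit descendants add tear-down, assertion helpers, and reporting on top of this same skeleton.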

John Zabroski

Posts: 272
Nickname: zbo
Registered: Jan, 2007

Re: Religion's Newfound Restraint on Progress Posted: Oct 13, 2007 1:36 AM
@James O. Coplien
@I'm all for testing, as long as it isn't the central focus of your quality program.

Put another way, quality is about balance and rhythm.

I also appreciate your comments about academia and its priorities. It's funny to think that if you type "Java Schools" into Google and click "I'm Feeling Lucky", the page you are directed to is Joel Spolsky's rant about The Perils of JavaSchools: http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html

Joel's rant woke me up. I must've been about 21 years old when I first read it. After thinking about it, I started to have a better grasp of what I hated in my classes. However, I don't think the difference is as simple as teaching Scheme instead of Java. And it's also not as simple as "Java is what the local industry wants". The College Board AP exam is Java-based, and it is the reason my college decided to move toward Java.

I'm 23 years old now, I graduated college in May, and I recently co-wrote a curriculum assessment for my college's Computer Science program. Students don't learn functional programming where I attended, so we had to look at this so-called Java-based curriculum critically and examine its strengths and weaknesses. To do this, we read, between us, 10 years of ACM SIGCSE papers and other CS education papers in order to assess where the program stood. We couldn't just turn our noses up and say, "You should be teaching us Scheme, silly!"

The major point I want to get across is that Scheme vs. Java doesn't enter into assessing whether or not logical design is being taught. The greatest strength I think a programmer can have is understanding the differences between imperative knowledge, interrogative knowledge, exclamatory knowledge, and declarative knowledge. I've said this before in the comments sections of others' blogs, so rather than repeating myself: http://blog.tmorris.net/imperative-programming-is-a-special-type-of-functional-programming/

As an illustration of this point, look no further than Michael Hobbs' recent blog entry about "Namespacey Programming". I was the only person to point out that there was a serious flaw in Joe Armstrong's rant about "Why OO Sucks". What Joe was missing was some form of relational constraint on his type definitions. Those constraints can manifest themselves in a number of ways, but they're inherently declarative.

Lasse Koskela

Posts: 5
Nickname: lkoskela
Registered: Oct, 2007

Regarding the referenced Siniaalto and Abrahamsson studies Posted: Oct 13, 2007 1:53 PM
I'll throw out a disclaimer to start with: I've just written a book on TDD.

I've had the privilege of getting a sneak peek at the referenced "Siniaalto and Abrahamsson" papers and I certainly cannot draw the kind of conclusions from their results that Mr. Coplien above implies.

First of all, the two papers studied a total of 8 teams, of which just one consisted of professional programmers; the rest were undergraduate students. Furthermore, the only statistically significant difference between the code bases of the test-first and test-last teams was (and this is stated clearly in the paper) that the TDD teams' code tended to have worse LCOM (Lack of Cohesion in Methods).

Now, what has a negative effect on LCOM? Is it that the programmer uses TDD? No. What affects LCOM most is the programmer's sensitivity to good design. The only conclusion I can draw from the fact that the TDD teams had worse LCOM is that they weren't refactoring properly--in other words, they weren't using TDD properly.
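For readers unfamiliar with the metric, here is a rough sketch of the classic Chidamber-Kemerer LCOM calculation, assuming we already know which instance attributes each method touches (the example classes are hypothetical, not from the studies):

```python
from itertools import combinations

# LCOM (Chidamber-Kemerer variant): count method pairs sharing no instance
# attributes, subtract pairs sharing at least one, and floor at zero.
# Higher values suggest the class bundles unrelated concerns.
def lcom(method_attrs):
    """method_attrs maps method name -> set of attributes it uses."""
    disjoint = shared = 0
    for a, b in combinations(method_attrs.values(), 2):
        if a & b:
            shared += 1
        else:
            disjoint += 1
    return max(disjoint - shared, 0)

# A cohesive class: every method works on the same state.
cohesive = {"push": {"items"}, "pop": {"items"}, "peek": {"items"}}

# A less cohesive class: two unrelated clusters of state.
mixed = {"save": {"db"}, "load": {"db"}, "render": {"screen"}}
```

Note how sensitive the number is to design choices like splitting state across attributes, which is one reason a single metric difference between teams is hard to interpret.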

I appreciate Mr. Coplien's call for data to back up "beliefs" but studies like these are hardly what I'd call reliable data.

I also think that referencing these studies in this manner before they're available to the general public is a suspicious thing to do. I mean, remember what happened with the 1970 Royce paper?

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

embrace quality Posted: Oct 13, 2007 2:12 PM
Agile is about engaging the user; TDD, about engaging the code.

Or was engaging the programmer always what XP was about?

Alan Cooper: "The impression I get from XP is that it addresses the problems of corporate developers."

Extreme Programming vs. Interaction Design
http://www.ftponline.com/interviews/beck_cooper/default.asp

Whatever you think about that, maybe we can agree with Alan Cooper's emphatic: "Rather than 'embrace change,' I would say, 'embrace quality.'"



Incidentally, I've trawled through DeWayne Perry's publication list without finding anything that might correspond to the research mentioned.

Incidentally, have you read that "Siniaalto and Abrahamsson" paper or did you just grab "Alarming" from the title? :-)

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

the "TDD-or-acceptance" question Posted: Oct 13, 2007 2:47 PM
> The answer for industry, I think, is to focus on Quality and to find
> the techniques that work best and are the most cost-effective for your
> project. Just answering "both" to the "TDD-or-acceptance" question is
> not only at the wrong scope, it is missing the whole point.

I agree that misses the point.

It also fails to address the enormous cost of testing - when 35% of the code written is test cases, we have a problem.


Maybe black-box tools like Quviq QuickCheck can bring down that cost:

59 min Google video - "A Secret Weapon for Software Testing"
http://video.google.com/videoplay?docid=4655369445141008672
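The QuickCheck idea can be sketched without the tool itself: state a property, throw many random inputs at it, and report any counterexample. This hand-rolled Python version (hypothetical helper names) omits the automatic input shrinking that real QuickCheck implementations provide:

```python
import random

def check_property(prop, make_input, runs=200):
    """Generate random inputs; return the first counterexample, or None."""
    for _ in range(runs):
        data = make_input()
        if not prop(data):
            return data  # counterexample found
    return None

# Property under test: reversing a list twice is the identity (round-trip).
def round_trip_holds(xs):
    return list(reversed(list(reversed(xs)))) == xs

# Generator: random-length lists of small integers.
def random_int_list():
    return [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
```

One property plus a generator replaces dozens of hand-written example cases, which is exactly the cost-reduction argument.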

Sinisa Vlajic

Posts: 2
Nickname: svlajic
Registered: Aug, 2005

Re: Religion's Newfound Restraint on Progress Posted: Oct 13, 2007 2:55 PM
In my opinion, the TDD and use-case-driven software life-cycle techniques don't exclude each other. The use case model is the basis from which the conceptual model (the structure of the software system) and the system operations (the behavior of the software system) are created. For each system operation a contract is made (abstraction level). In the design phase, system operations are realized by different methods (realization level). This is the point where we can estimate the quality attributes of the design (testability, reusability, maintainability, ...); an excellent tool for Java is Swat4J.
On the other side, TDD produces:
a) user stories and
b) test procedures (abstraction level) for testing the above methods (realization level). I think that:
*) test procedures (from TDD) and contracts for system operations (from use-case-driven) have the same semantics at the conceptual level.
*) XP user stories are at a higher abstraction level than use cases; in this sense, use cases realize user stories.
Also, I think that comparing quality and testing, in the sense of which is better, aims at two points that sit in different places in the software: quality inspects the result of design and uses refactoring to enhance the design, while testing inspects the functionality of the system operations.
In the end, I think the TDD and use-case-driven techniques give good results when there is synergy between them.

John Zabroski

Posts: 272
Nickname: zbo
Registered: Jan, 2007

Re: embrace quality Posted: Oct 13, 2007 8:21 PM
@Isaac Gouy
@Alan Cooper: "The impression I get from XP is that it addresses the problems of corporate developers."

What does that even mean? Is that a good or bad thing?

@maybe we can agree with Alan Cooper's emphatic: "Rather than 'embrace change,' I would say, 'embrace quality.'"

What's the gross impact? Is it existential? Is it practical?

In my eyes, a comparison of XP and Interaction Design is devolving into a comparison of two slogans.

I'm naive, but I always looked at the benefit of most of XP as synergy, in particular continuous integration. There is a lot to like about combining phases, and it's worth noting that the quote from Alan Cooper comes from a discussion about phases and whether or not they are a Good Thing.

Maxim Noah Khailo

Posts: 25
Nickname: mempko
Registered: Nov, 2004

Re: Religion's Newfound Restraint on Progress Posted: Oct 14, 2007 12:01 AM
I actually went to North Central College and can verify that Cope is completely correct.

There was very little emphasis on thinking and much more on techniques in the curriculum. Opdyke was one professor who at least tried to get people to design some sort of architecture... I actually made Use Cases in one class!

In fact, I found the curriculum so "irrelevant" that I decided not to take any CS courses my senior year and instead spent 6 months in Japan learning pottery making and wabi-sabi, and STILL managed to graduate with no problem.

Code, incidentally, is very personal, and people are generally very possessive of it. When you code, you design. When you code tests, they end up being part of the design. Depending on how you do TDD, you might end up with more code than the next guy, cementing whatever design you get and precluding you from any sort of useful refactoring.

Michael Feathers

Posts: 448
Nickname: mfeathers
Registered: Jul, 2003

Re: the "TDD-or-acceptance" question Posted: Oct 14, 2007 5:56 PM
Isaac Gouy wrote:
> > The answer for industry, I think, is to focus on Quality and to
> > find the techniques that work best and are the most cost-effective
> > for your project. Just answering "both" to the "TDD-or-acceptance"
> > question is not only at the wrong scope, it is missing the whole
> > point.
>
> I agree that misses the point.
>
> It also fails to address the enormous cost of testing - when 35% of
> code written is test cases we have a problem.

It's funny. When I read that, my first thought was that no one would see the cost of thinking as a problem, and yet that is exactly what accounts for the cost of TDD. You are spending time thinking about your code and in the end you not only have better quality, you have a mechanism for keeping it.

At JAOO, I gave a talk called The Ethics of Error-Prevention. In a nutshell, it was about the curious fact that we don't do enough to ensure quality, despite the fact that we have an enormous number of tools at our disposal: clean room, Hindley-Milner typing, design by contract, TDD, review and inspection, etc. Each of these has been shown to increase quality, so it's really malpractice not to use any of them. Beyond that though, it is interesting to notice that they all work, to a degree. So, what's common?
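One of the listed techniques, design by contract, can be approximated in a few lines. This Python sketch is hypothetical (the `requires`/`ensures` decorators and the `withdraw` example are illustrations, not a real library's API); languages like Eiffel build this support in directly:

```python
import functools

# Precondition decorator: check the arguments before the call.
def requires(check, message):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            assert check(*args, **kwargs), f"precondition failed: {message}"
            return fn(*args, **kwargs)
        return inner
    return wrap

# Postcondition decorator: check the result after the call.
def ensures(check, message):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            assert check(result), f"postcondition failed: {message}"
            return result
        return inner
    return wrap

@requires(lambda balance, amount: amount > 0, "amount must be positive")
@ensures(lambda result: result >= 0, "balance never goes negative")
def withdraw(balance, amount):
    return max(balance - amount, 0)
```

As with the other techniques, the common thread is that the contract forces you to state, and pay attention to, what the code is supposed to guarantee.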

I think it's this: each of these techniques forces you to pay more attention to what you are doing. And, ultimately, they pay dividends, some more than others. The nice thing about TDD is that you don't just end up with high quality, you end up with tests which help you keep that quality under change. To me, it seems nicer than having to determine the scope of re-analysis for inspections or manual re-evaluation for clean-room style predicates. I'm paranoid. I'd rather run something that performs a set of verifications than convene another meeting when I have doubts.

I don't really have much patience any longer for cost-based objections to quality techniques. When you look at the cost of fixing bugs, you typically get it all back. And I don't think that 35% of code being test code is in any sense alarming. Many teams doing TDD end up with more test code than production code, and they move along very swiftly.


Copyright © 1996-2019 Artima, Inc. All Rights Reserved.