Re: Continuous Integration Hell
Posted: Jan 26, 2004 7:01 AM
> > > But, when you do that you are doing something almost akin
> > > to requirements gathering, you are figuring out how
> > > things you depend upon fail and using that to drive
> > > change to your software.
>
> Why yes, of course. For me that is the big T in testing.
> I don't test to find out stuff I already know.
> The little t is to drive the design.
Then there's the case of things we think we know are true, but aren't. Or things that once were true but, because of a bad change, aren't any longer.
> If on win32 the directory bits don't work
> when archive is set, or some other whacky thing,
> then I need to adjust my code. I need to know
> that before I release my code into the world.
> Having working code in the deployed
> environment is the point. If your code breaks
> but you say it worked with the mocks, then you
> aren't doing your job. I can't get behind
> a process that doesn't go to extremes to
> eliminate bugs.
I don't think anyone advocated that. Everyone is responsible for their own work, and people need whatever number of tests it takes to be confident in their code. When you can't feel confident in your code no matter how many tests you write, that's often a sign of a serious design problem.
There is a reason to have tests even for things that you are confident in (and I think that's the difference between our positions), and that's to prevent bugs. People make mistakes when they change old code; that's just life. Tests for "known" bits of code keep them known and make change easier. People don't have to fall into the conservative mind-set that often causes design to degrade: "let's change this code as little as possible."
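To make that concrete, here's a minimal sketch of what I mean by tests for "known" behavior. The function and its spec are hypothetical, invented just for illustration; the point is that the tests pin down what the code does today, so a later "harmless" change that breaks it fails loudly instead of silently:

```python
import unittest

def normalize_path(path):
    # Hypothetical helper: collapse repeated separators, accept
    # either slash style, and drop any trailing separator.
    parts = [p for p in path.replace("\\", "/").split("/") if p]
    return "/".join(parts)

class KnownBehaviorTests(unittest.TestCase):
    """Pin down behavior we 'know' is true, so refactoring is safe."""

    def test_collapses_repeated_separators(self):
        self.assertEqual(normalize_path("a//b///c"), "a/b/c")

    def test_accepts_backslashes(self):
        self.assertEqual(normalize_path("a\\b/c"), "a/b/c")

    def test_drops_trailing_separator(self):
        self.assertEqual(normalize_path("a/b/"), "a/b")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Nobody "needs" these tests to believe the code works right now; they exist so the next person can rewrite the internals aggressively and find out immediately if they changed something that mattered.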
Going to extremes to eliminate bugs? I'm all for it, as long as people don't forget that tests can prevent bugs, too.