Re: Pardon My French, But This Code Is C.R.A.P. (2)
Posted: Jul 19, 2007 3:37 PM
> > The C.R.A.P. index has one purpose, and one purpose only:
> > to help predict the risk, effort, and pain...
> Er, that would be three things. In particular measuring
> risk of change and effort of change seem very different.
> I'm not sure what is meant by pain here.
Vincent, I am sure some people think you are a bit too pedantic :-), but I actually appreciate the fact that you are bringing up some points that require clarification. My sincere thanks. No smiley here - I mean that.
In my experience as a developer and manager I have regularly experienced a strong correlation between risk, effort, and pain when maintaining other people's code. Usually, when working with other people's code is particularly painful, it is also (or feels like) more of an effort and it also feels much riskier. But, as I am sure you'd point out, correlation does not imply causality.
The C.R.A.P. index is aimed at Change Risk. The fact that more risk often means more effort, and more effort often means more pain (i.e. longer hours, stop-and-go release cycles, etc.) is secondary.
> > In that respect, our intuition, experience, and research
> > indicates that complexity makes code maintenance harder
> > and automated tests make maintenance safer.
> Again, two things appear to be being conflated.
> Complexity and safety.
Well, conflating complexity and safety is probably wrong, but I was associating, not conflating, complexity and risk. Again, I believe (based on personal experience, intuition, and other people's research) that code complexity makes code changes riskier (i.e. less safe).
> I can imagine situations where
> making code safer might require added complexity.
We are not talking about making the code safer, we are talking about making changes to the code safer.
> > If you don't think that this is the case, please feel
> > free to give us suggestions and recommendations for
> > improving the metric.
> I think the only way to improve the metrics is to measure
> them for real. In the formula you gave, you implied that
> "crapness" was proportional to the square of the
> complexity and the cube of the proportion of the code not
> covered by automated tests. Do these suspiciously precise
> ratios have any empirical basis or are they just plucked
> out of the air?
I am sure you have heard of the method of successive approximations for solving equations. This approach requires an initial educated guess and then, through experience, experiments, etc., the tentative solution is adjusted over time and, hopefully, it moves closer to the real solution. Basic trial and error.
That's what we are trying to do. This is why we presented this as version 0.1, are soliciting input, and have started running in-house experiments on our own code and on open-source code to see how we need to adjust the various parameters to better match reality. I don't think I could have been more explicit about this.
If I had come out and said: "Listen all, we have found the perfect and ultimate metric to predict code change risk and here it is!" then you'd have a point, but I presented this initial version of the C.R.A.P. index with plenty of humility and in the knowledge that it's a first effort and that we have plenty of work left to do.
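For readers following the thread, the exact version 0.1 formula is not reproduced here, but the proportionalities Vincent quotes (risk growing with the square of complexity and the cube of the proportion of code not covered by tests) can be sketched as follows. The function name and the use of coverage as a fraction in [0, 1] are illustrative assumptions, not the published metric:

```python
def change_risk(complexity: float, coverage: float) -> float:
    """Sketch of a C.R.A.P.-style score, assuming the proportionalities
    quoted in this thread: risk grows with the square of (cyclomatic)
    complexity and the cube of the fraction of code NOT covered by
    automated tests. Illustration only, not the published formula.
    """
    uncovered = 1.0 - coverage  # coverage given as a fraction in [0, 1]
    return complexity ** 2 * uncovered ** 3

# A method of complexity 3 with no tests scores far higher than the
# same method fully covered:
print(change_risk(3, 0.0))  # 9 * 1 = 9.0
print(change_risk(3, 1.0))  # fully covered -> 0.0
```

The cube on the uncovered fraction makes the score drop off quickly as coverage rises, which matches the stated intent that automated tests make changes safer.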
> I applaud your motivation for seeking to measure code
> quality, however, I do think you need to be a lot more
> precise in defining what it is that you are seeking to
> measure and, when you present mathematical formulae, you
> must be able to justify the terms in some way.
Again: we are not seeking to measure code QUALITY. We are in the midst of developing and testing a metric to help predict the risk associated with code change. It has nothing to do with code quality.
As far as justifying the terms goes ... considering that this is a blog and not a scientific paper, and that we have been clear that this is not a final version of the metric but a first pass, I believe that the level of justification is more than adequate.
But to be clear: we did not pull the numbers out of thin air. We studied previous research, plugged in various parameters, took code samples and ran the metric on them to see if the numbers we came up with made any sense and were in the "ballpark".
What we presented as version 0.1 is actually the latest of many different formulae we have tried, and the result of several months of in-house research and experimentation at Agitar Labs. Not to mention several decades of combined research experience in academia and industry from myself, my co-founder, and our research staff.
We believe it's important to make progress in the area of risks associated with code changes. Progress requires experience and experiments. Experiments require a starting point, an initial conjecture. That's what C.R.A.P. 0.1 is. Please treat it accordingly.