The Artima Developer Community

Weblogs Forum
Software Metrics Don't Kill Projects, Moronic Managers Kill Projects

50 replies on 51 pages. Most recent reply: May 16, 2008 1:38 PM by Thomas Cagley

Alberto Savoia

Posts: 95
Nickname: agitator
Registered: Aug, 2004

Re: Software Metrics Don't Kill Projects, Moronic Managers Kill Projects Posted: Dec 19, 2007 3:56 PM
Cem Kaner wrote:
-----------------------------------------------------------
I am suggesting a huge increase in the humility index associated with the statistics we collect from our development and testing efforts. I am suggesting a fundamental refocusing on the questions we are trying to answer rather than the statistics we can easily compute that maybe answer maybe some of the questions maybe to some unknown degree with some unconsidered risk of side effects. I am suggesting that we take the risks of side effects more seriously and consider them more explicitly and manage them more thoughtfully. And I am saying that we demand research that is much more focused on the construct and predictive validity of proposed metrics, with stronger empirical evidence--this is hard, but it is hard in every field.
------------------------------------------------------------

Cem,

I agree with everything you say in the above paragraph. Believe it or not, the goals of C.R.A.P. when we started were very similar to the ones you state: especially the humility part (although, given my personality, that translates into "let's not take ourselves too seriously"), focusing on specific attributes, collecting data, doing more research, and keeping the metric and the thinking behind it open so people can run their own experiments.

Here's some unedited text from one of the earliest C.R.A.P. posts in July of this year:

-----------------------

Below is some of our thinking behind the C.R.A.P. index:

[] We believe that software metrics, in general, are just tools. No single metric can tell the whole story; it’s just one more data point. Metrics are meant to be used by developers, not the other way around – the metric should work for you, you should not have to work for the metric. Metrics should never be an end unto themselves. Metrics are meant to help you think, not to do the thinking for you.

[] We believe that, in order to be useful and become widely adopted, a software metric should be easy to understand, easy to use, and – most importantly – easy to act upon. You should not have to acquire a bunch of additional knowledge in order to use a new metric. If a metric tells you that your inter-class coupling and coherence score (I am making this up) is 3.7, would you know if that’s good or bad? Would you know what you need to do to improve it? Are you even in a position to make the kind of deep and pervasive architectural changes that might be required to improve this number?

[] We believe that the formula for the metric, along with the various implementations of the software that calculates it, should be open-source. We will get things started by hosting a Java implementation of the C.R.A.P. metric (called crap4j) on SourceForge.

[] The way we design, develop, and deploy software changes all the time. We believe that with software metrics, as with software itself, you should plan for, and expect, changes and additions as you gain experience with them. Therefore the C.R.A.P. index will evolve and, hopefully, improve over time. In that spirit, what we present today is version 0.1 and we solicit your input and suggestions for the next version.

[] We believe that a good metric should have a clear and very specific purpose. It should be optimized for that purpose, and it should be used only for that purpose. The more general and generic a metric is, the weaker it is. The C.R.A.P. index focuses on the risk and effort associated with maintaining and changing an existing body of code by people other than the original developers. It should not be abused or misused as a proxy for code quality, evaluating programmers’ skills, or betting on a software company’s stock price.

[] Once the objective for the metric is established, the metric should be designed to measure the major factors that impact that objective and encourage actions that will move the code closer to the desired state with respect to that objective. In the case of C.R.A.P., the objective is to measure and help reduce the risks associated with code changes and software maintenance – especially when such work is to be performed by people other than the original developers. Based on our initial studies and research on metrics with similar aims (e.g., the Maintainability Index from CMU’s Software Engineering Institute) we decided that the formula for version 0.1 of the C.R.A.P. index should be based on method complexity and test coverage.

[] There are always corner cases, special situations, etc., and any metric might misfire on occasion. For example, C.R.A.P. takes into account complexity because there is good research showing that, as complexity increases, the understandability and maintainability of a piece of code decreases and the risk of defects increases. This suggests that measuring code complexity at the method/function level and making an effort to minimize it (e.g. through refactoring) is a good thing. But, based on our experience, there are cases where a single method might be easier to understand, test, and maintain than a refactored version with two or three methods. That’s OK. We know that the way we measure and use complexity is not perfect. We have yet to find a software metric that’s right in all cases. Our goal is to have a metric that’s right in most cases.

...

Software metrics have always been a very touchy topic; they are perfect can-of-worms openers and an easy target. When we started this effort, we knew that we’d be in for a wild ride, a lot of criticism, and lots of conflicting opinions. But I am hopeful that – working together and with an open-source mindset – we can fine-tune the C.R.A.P. index and end up with a metric that will help reduce the amount of crappy code in the world.

OK. Time for some feedback – preferably of the constructive type so that C.R.A.P. 0.2 will be better than C.R.A.P. 0.1.


----------------

I'd like to think that the above provides evidence of humility on our part, of awareness of the many inadequacies of any metric and its potential for misuse, of the need to focus on specific attributes (which, for C.R.A.P., is maintainability by developers other than the original developers – not quality), of the need to test predictive power, and so on.
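To make that version 0.1 design a bit more concrete, here is a minimal, self-contained sketch of a per-method score. It assumes the formula published with the 0.1 announcement, comp(m)^2 * (1 - cov(m)/100)^3 + comp(m), where comp(m) is the method's cyclomatic complexity and cov(m) its test coverage percentage; the class and method names below are illustrative and are not part of crap4j.

/**
 * Minimal sketch of a per-method C.R.A.P.-style score.
 * Assumes the version 0.1 formula: comp^2 * (1 - cov/100)^3 + comp,
 * where comp is cyclomatic complexity and cov is test coverage (0..100).
 * Illustrative only; this is not the crap4j implementation.
 */
public final class CrapScoreSketch {

    /**
     * @param complexity      cyclomatic complexity of the method (>= 1)
     * @param coveragePercent test coverage of the method, 0..100
     */
    static double crapScore(int complexity, double coveragePercent) {
        double uncovered = 1.0 - (coveragePercent / 100.0);
        return complexity * complexity * Math.pow(uncovered, 3) + complexity;
    }

    public static void main(String[] args) {
        // A simple, fully covered method scores low...
        System.out.println(crapScore(2, 100.0));  // 2.0
        // ...while a complex, completely untested method scores very high.
        System.out.println(crapScore(15, 0.0));   // 240.0
    }
}

The point of the sketch is simply that the score rises quadratically with complexity unless coverage pulls it back down – which is what makes the number easy to act upon: either simplify the method or test it.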

Cem Kaner wrote:

-----------------------------------------------------------
And I am saying that we demand research that is much more focused on the construct and predictive validity of proposed metrics, with stronger empirical evidence--this is hard, but it is hard in every field.
-----------------------------------------------------------

I want that too; but in order to test the construct validity and predictive value we need some metrics to test with, some people willing to use them on real-world projects, and some people willing to share the data as well as their opinion of the metric "readings". You say "this is hard", and I could not agree more, but we gotta start somewhere. The latest version of crap4j offers an embryonic mechanism to encourage data sharing. It's very, VERY, primitive and limited at this time, but you can get a flavor of it at:
http://crap4j.org/benchmark/stats/ and use your imagination for how it might be evolved and used.

We don't want to do this work alone. We are looking for other people to push back, to propose and test completely different measures and formulae, and so on. That's why all the code is open-source. Of course, it would be great to have a combination of industry and academic people working on "the next generation of metrics". Given how strongly and passionately you feel about the topic, is this something that you (or some of your students/colleagues) might be interested in?

Alberto

P.S.

Cem Kaner wrote:
------------------------------------------------------------
Alberto, I wrote my last note (the one on your blog post that follows up to this post), speaking to you by name, because I respect you enough and like you enough to say that I'm disappointed. I've spent a lot of writing hours on this thread--usually I skip blog posts on what I think of as overly simplistic approaches to software measurement, but I put a lot of time into this one because it is your thread. That makes it worth my attention.
------------------------------------------------------------

Hopefully, reading some of the material in this reply (as well as in previous posts) gives you a bit more context for my last two posts and a better perspective on what we are trying to accomplish.

I also spent a lot of writing hours on these replies for the same reasons you mention (including last night past 11PM - when I told my wife what I was doing she thought I was crazy :-)). I appreciate the respect, return it several fold, and - if at all possible from your end - I would love an opportunity to continue this discussion offline and see if we can find a way to work together, or at least with more awareness of each other, going forward.

Alberto
