The Artima Developer Community

Weblogs Forum
Programming as Typing

57 replies on 4 pages. Most recent reply: Jul 12, 2006 12:20 PM by James Watson

benjamin schmaus

Posts: 1
Nickname: schmausb
Registered: Aug, 2005

Re: Programming as Typing Posted: Jul 11, 2006 7:14 AM
> I have since thought of an even better analogy than
> writing novels, which should have been obvious since I've
> acted in a number of plays recently and because
> programming is more often a team activity. One difficult
> person in a cast can easily screw everything up. And
> choosing cast and crew must be done with care, selecting
> each person for the way their skills fit with the need.
> Actors are not interchangeable; you must get the right
> ones to do the job.

I like both the novel and the play analogies that you've mentioned.

I think that another good analogy is playing in a band, where you have multiple musicians contributing ideas and coordinating their efforts to create something greater than the sum of their individual inputs. The particular musicians in a group can make a huge difference in the music that is actually produced.

Writing a novel or a piece of music is also similar in that, while the author is writing a given chapter or passage, they need to be aware of how that small piece relates to the larger whole.

As you say, in software the artifact of thought is code, in writing it's a story, and in music it's a song. In each case abstract thought is turned into something useful. In the case of the latter two, the manifestation of thought doesn't seem as abbreviated as it is in software (at least source code by itself).

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: experiments & dogma Posted: Jul 11, 2006 11:01 AM
>
> You can also use logic.

Sure, but the problem with logic is that you have to agree on both the premises and the validity of the logic used to draw the conclusions. It's very difficult to get agreement on the premises, and it's also difficult to frame the argument in such a way that it can be proven logically using accepted techniques.

In practice, you usually have to settle for plausible arguments that aren't logically proven.

For example it's plausible that two programmers working together would write better code because "two heads are better than one". On the other hand it's also plausible that two programmers working together will be less effective because "too many cooks spoil the broth".

> In your terminology, does 'formal' mean the process comes
> from outside the team?

No, not necessarily. As you know, code doesn't spring from the ground, it has to be written by programmers. So there must be some process involved even if it's just: write code, compile it and run it. So a "formal" process is one that has been explicitly defined.

> There have also been some spectacular failures.

Of course. As there have been with all formal methodologies as well. My point is that a formal methodology is neither necessary nor sufficient to ensure success. We are left to argue about which process (formal or not) will give the best chance of success.

>
> I have a hard time imagining how a team of more than
> several developers can create a non-trivial piece of
> software without any structure but perhaps that's what you
> mean by ad-hoc.

I agree that there are greater organizational issues with a larger team, but I don't see formal methodologies providing much insight into issues of scale. They pretty much require that the same rules be followed regardless of the size of the team.

James Watson

Posts: 2024
Nickname: watson
Registered: Sep, 2005

Re: experiments & dogma Posted: Jul 11, 2006 1:46 PM
> For example it's plausible that two programmers working
> together would write better code because "two heads are
> better than one". On the other hand it's also plausible
> that two programmers working together will be less
> effective because "too many cooks spoil the broth".

Yes but, as I am sure you are aware, you can't allow yourself to be frozen by your inability to 'prove' a course of action is better. Sometimes you have to go with your gut. Now that I write that, I recall reading something about how certain individuals are able to make surprisingly good 'gut decisions' and the hypothesis that this is one of the characteristics of a great leader.

> Of course. As there has been with all formal methodologies
> as well. My point is that a formal methodology is neither
> necessary nor sufficient to ensure success. We are left to
> argue about which process (formal or not) will give the
> best chance of success.

Right. My point is that we cannot just stick our heads in the sand and say "where's the data?" The decision requires assessments that will be based on things that are not strictly empirical. I'm really arguing against the idea that "if it isn't measurable, it doesn't matter", which is one of the 'really bad ideas' introduced into organizations in recent times. Some of the dumbest things I see in my current job are the direct result of this idea.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: experiments & dogma Posted: Jul 11, 2006 3:06 PM
James Watson wrote
-snip-
> Right. My point is that we cannot just stick our heads in
> the sand and say "where's the data?" The decision
> requires assessments that will be based on things that are
> not strictly empirical. I'm really arguing against the
> idea that "if it isn't measurable, it doesn't matter",
> which is one of the 'really bad ideas' introduced into
> organizations in recent times. Some of the dumbest things
> I see in my current job are the direct result of this idea.

The difference between "it isn't measurable" and "we can't be bothered measuring it" is significant.

We can measure our current baseline.
We can claim what will improve when we change X.
We can change X and measure our new baseline.
We can verify our earlier claim.
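
As a rough illustration of that loop in code (a minimal sketch; the metric, the monthly numbers, and the Welch t-statistic check are invented purely for illustration, not taken from any study):

    from math import sqrt
    from statistics import mean, stdev

    # Invented monthly defect counts, before and after changing X.
    baseline = [12, 9, 14, 11, 10, 13]
    after_change = [8, 7, 9, 6, 10, 7]

    def welch_t(a, b):
        """Welch's t-statistic for two independent samples."""
        var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
        return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

    # Step 1: measure the baseline.  Step 2: claim the change will lower it.
    # Step 3: make the change and remeasure.  Step 4: verify the claim.
    print("baseline mean: %.1f  after: %.1f" % (mean(baseline), mean(after_change)))
    print("Welch t-statistic: %.2f" % welch_t(baseline, after_change))
    # A t-statistic well above ~2 suggests the drop is more than noise.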


More broadly, we might not know about experimental research in software development, but that doesn't mean it isn't being done -

Based on 29 projects at HP and Agilent
http://www.cc.gatech.edu/classes/AY2005/cs6300_fall/papers/maccormack.pdf

Based on 29 projects at 17 companies
http://www.sloanreview.mit.edu/smr/issue/2001/winter/6/

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: experiments & dogma Posted: Jul 11, 2006 3:12 PM
> Yes but, as I am sure you are aware, you can't allow
> yourself to be frozen by your inability to 'prove' a
> course of action is better. Sometimes you have to go with
> your gut. Now that I write that, I recall reading
> something about how certain individuals are able to make
> suprisingly good 'gut decisions' and the hypothesis that
> this is one of the characteristics of a great leader.

I first learned about the "do nothing" option in engineering school. It's often more efficient to stay the course than try unproven solutions.

As far as "gut decisions" are concerned, the problem is that not everyone's guts say the same things. Another definition of a leader is a person who thinks their guts are better than everybody else's and is aggressive enough to persuade the powerful to go along.

> Right. My point is that we cannot just stick our heads in
> the sand and say "where's the data?" The decision
> requires assessments that will be based on things that are
> not strictly empirical. I'm really arguing against the
> idea that "if it isn't measurable, it doesn't matter",
> which is one of the 'really bad ideas' introduced into
> organizations in recent times. Some of the dumbest things
> I see in my current job are the direct result of this idea.

The problem is that there have to be some objective criteria if the goal is to make the best decision. Otherwise it's like arguing about God: how can I select one religion over another when they all claim they're right?

In my view we are asking the wrong question: what methodology should we make everyone use, no matter who they are or what the project is? As working programmers we don't need to know what is best in all situations, we only need to know what's best for our own.

Why can't we take advantage of what we know about a project and the people working on it rather than relying on the dead hand of people who don't know anything about it?

If a couple of programmers work really well together why not let them do pair programming? If others work best on their own, why not let them? Let's not get dogged by the dogma.

James Watson

Posts: 2024
Nickname: watson
Registered: Sep, 2005

Re: experiments & dogma Posted: Jul 12, 2006 6:38 AM
> > Yes but, as I am sure you are aware, you can't allow
> > yourself to be frozen by your inability to 'prove' a
> > course of action is better. Sometimes you have to go with
> > your gut. Now that I write that, I recall reading
> > something about how certain individuals are able to make
> > surprisingly good 'gut decisions' and the hypothesis that
> > this is one of the characteristics of a great leader.
>
> I first learned about the "do nothing" option in
> engineering school. It's often more efficient to stay the
> course than try unproven solutions.

Doing nothing is, in this context, a course of action. And like many others it's usually not possible to prove it is the best choice.

> As far as "gut decisions" are concerned, the problem is
> that not everyone's guts say the same things.

That's why not all leaders succeed.

> Another
> definition of a leader is a person who thinks their guts
> are better than everybody else's and is aggressive enough
> to persuade the powerful to go along.

That person may be a leader but I wouldn't call that leader 'great'.

> > Right. My point is that we cannot just stick our heads in
> > the sand and say "where's the data?" The decision
> > requires assessments that will be based on things that are
> > not strictly empirical. I'm really arguing against the
> > idea that "if it isn't measurable, it doesn't matter",
> > which is one of the 'really bad ideas' introduced into
> > organizations in recent times. Some of the dumbest things
> > I see in my current job are the direct result of this idea.
>
> The problem is that there has to be some objective
> criteria if the goal is to make the best decision.
> Otherwise it's like arguing about God: How can I select
> one religion over another when they all claim they're
> right?

I'm not sure if you are religious (I am not) but people do this all the time.

Let's make sure we are talking about the same thing. Suppose you are leading a platoon of men in battle. You approach an area that is a great place for an ambush. You do not detect an enemy in the area.

According to the logic of the "not measurable -> doesn't matter" mantra, you should not waste time making sure the area is clear. You can't measure the risk in this situation.

It's pretty clear to me that that line of reasoning is pure crap. The vast majority of information available for making a choice is not quantifiable. Some of the things that are quantifiable are not significant and/or are not accurate.

> In my view we are asking the wrong question: what
> methodology should we make everyone use, no matter who
> they are or what the project is? As working programmers we
> don't need to know what is best in all situations, we only
> need to know what's best for our own.

I agree.

> Why can't we take advantage of what we know about a
> project and the people working on it rather than relying
> on the dead hand of people who don't know anything about
> it?

Ding-ding-ding! You, sir, are a winner!

> If a couple of programmers work really well together why
> not let them do pair programming? If others work best on
> their own, why not let them? Let's not get dogged by the
> dogma.

Marry me!

James Watson

Posts: 2024
Nickname: watson
Registered: Sep, 2005

Re: experiments & dogma Posted: Jul 12, 2006 6:46 AM
> James Watson wrote
> -snip-
> > Right. My point is that we cannot just stick our heads in
> > the sand and say "where's the data?" The decision
> > requires assessments that will be based on things that are
> > not strictly empirical. I'm really arguing against the
> > idea that "if it isn't measurable, it doesn't matter",
> > which is one of the 'really bad ideas' introduced into
> > organizations in recent times. Some of the dumbest things
> > I see in my current job are the direct result of this idea.
>
> The difference between "it isn't measurable" and "we can't
> be bothered measuring it" is significant.
>
> We can measure our current baseline.

In what way do you mean we can measure the baseline? Lines of code? Defects? Maintainability?

> We can claim what will improve when we change X.

I say changing a given line of code will make the code more maintainable.

> We can change X and measure our new baseline.

Remeasure the maintainability.

> We can verify our earlier claim.

Success!

> More broadly, we might not know about experimental
> research in software development, but that doesn't mean it
> isn't being done -
>
> Based on 29 projects at HP and Agilent
> http://www.cc.gatech.edu/classes/AY2005/cs6300_fall/papers/maccormack.pdf
>
> Based on 29 projects at 17 companies
> http://www.sloanreview.mit.edu/smr/issue/2001/winter/6/

The first looks like a good article and backs up a lot of my personal theories. I think the second requires moolah.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: experiments & dogma Posted: Jul 12, 2006 8:18 AM
James Watson wrote
> In what way do you mean we can measure the baseline?
> Lines of code? Defects? Maintainability?

Maintainability sounds like a bundle of a dozen more specific things.

> > We can claim what will improve when we change X.
> I say changing a given line of code will make the code
> more maintainable.
>
> > We can change X and measure our new baseline.
> > Remeasure the maintainability.

Remeasure the dozen more specific things

> > We can verify our earlier claim.
> Success!

Or not.


http://www.cc.gatech.edu/classes/AY2005/cs6300_fall/papers/maccormack.pdf
http://www.sloanreview.mit.edu/smr/issue/2001/winter/6/

> The first looks like a good article and backs up a lot of
> my personal theories. I think the second requires moolah.

Both articles have the advantage of being about software development rather than about writing novels, putting on a play, military technology, playing in a band, leading a platoon of men in battle... :-)


Previously "There are lots of things in life that are difficult if not impossible to qunatify and it's extremely dangerous to ignore or disregard things that cannot be proven scientifically."

As an uninteresting rejoinder we might say - it can be as dangerous to ignore that there are lots of things in life that can be quantified and investigated scientifically.

But that would be to give more credit to the argument than it deserves. I think we were just discussing software development - so it doesn't much matter that "there are lots of things in life..." etc


(The second article requires 7-12 USD or a large library close at hand; the webpage provides a full citation.)

James Watson

Posts: 2024
Nickname: watson
Registered: Sep, 2005

Re: experiments & dogma Posted: Jul 12, 2006 8:41 AM
> James Watson wrote
> > In what way do you mean we can measure the baseline?
> > Lines of code? Defects? Maintainability?
>
> Maintainability sounds like a bundle of a dozen more
> specific things.

Such as?

> > > We can claim what will improve when we change X.
> > I say changing a given line of code will make the code
> > more maintainable.
> >
> > > We can change X and measure our new baseline.
> > Remeasure the maintainability.
>
> Remeasure the dozen more specific things

It's easy to say this but hard to do. If you can explain what needs to be measured and how to do so, I would be swayed.

The biggest problem is that it's almost always impossible to narrow down the variables involved and show which were the cause of a certain effect. For example, it's been shown that red cars are more likely to be involved in accidents. So we can conclude that red cars cause accidents, right?

A specific example: in the first article you cite (ignoring that 29 projects is highly unlikely to be a statistically significant sample), it states that across all the projects the one thing consistently associated with productivity and low defect rates was an early prototype. But this is merely a correlation. It doesn't show causation. It may be that one or more of the actions required to create the prototype were responsible for the effect.
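
(To make the red-car point concrete, here is a toy simulation; it is only a sketch with made-up probabilities, nothing from the article. If aggressive drivers both prefer red cars and crash more often, colour and accidents correlate even though colour causes nothing.)

    import random

    random.seed(1)
    cars = []
    for _ in range(10000):
        aggressive = random.random() < 0.3
        red = random.random() < (0.6 if aggressive else 0.2)      # colour choice
        crash = random.random() < (0.15 if aggressive else 0.05)  # accident risk
        cars.append((red, crash))

    def crash_rate(is_red):
        outcomes = [crash for red, crash in cars if red == is_red]
        return sum(outcomes) / len(outcomes)

    print("accident rate, red cars:   %.3f" % crash_rate(True))
    print("accident rate, other cars: %.3f" % crash_rate(False))
    # Red cars come out worse, yet repainting a car would change nothing;
    # the confounder (the driver) is doing all the work.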

> Previously "There are lots of things in life that are
> difficult if not impossible to qunatify and it's extremely
> dangerous to ignore or disregard things that cannot be
> proven scientifically."

>
> As an uninteresting rejoinder we might say - it can be as
> dangerous to ignore that there are lots of things in life
> that can be quantified and investigated scientifically.

I would say that it's even more dangerous that it's easy to fool oneself into believing that something is scientific when it is not, as the above articles are not scientific. These types of studies fall under what scientists commonly call pseudo-science.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: experiments & dogma Posted: Jul 12, 2006 9:05 AM
James Watson wrote
> > Maintainability sounds like a bundle of a dozen more
> > specific things.
> Such as?

problem recognition, admin delay, problem analysis, active correction, test, recovery ...

Maybe you meant something more specific by maintainability - maybe you meant Mean Time To Repair


-snip-
> A specific example: in the first article you cite
> (ignoring that 29 projects is highly unlikely to be a
> statistically significant sample), it states that across
> all the projects the one thing consistently associated
> with productivity and low defect rates was an early
> prototype. But this is merely a correlation. It doesn't
> show causation. It may be that one or more of the actions
> required to create the prototype were responsible for the
> effect.

You seem to be ignoring that the article reports which results were statistically significant.

Yes the experiment provides evidence of correlation not causation - that's a big advance on not providing evidence of correlation or causation.
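
(As an aside, here is a minimal sketch of how significance of a correlation across 29 projects is usually checked: convert r to a t-statistic with n-2 degrees of freedom. The r value below is invented, not taken from the paper.)

    from math import sqrt

    n, r = 29, 0.45                       # hypothetical correlation across projects
    t = r * sqrt((n - 2) / (1 - r * r))   # t-statistic with n - 2 = 27 degrees of freedom
    print("t = %.2f" % t)                 # roughly 2.62
    # The two-tailed 5% critical value for 27 degrees of freedom is about 2.05,
    # so an r of 0.45 would be reported as significant even with only 29 projects;
    # significance depends on effect size and variance, not just sample size.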


-snip-
> I would say that it's even more dangerous that it's easy to
> fool oneself into believing that something is scientific
> when it is not, as the above articles are not scientific.
> These types of studies fall under what scientists
> commonly call pseudo-science.

Can you justify those claims about that specific article in some way?

James Watson

Posts: 2024
Nickname: watson
Registered: Sep, 2005

Re: experiments & dogma Posted: Jul 12, 2006 9:30 AM
> James Watson wrote
> > > Maintainability sounds like a bundle of a dozen more
> > > specific things.
> > Such as?
>
> problem recognition, admin delay, problem analysis, active
> correction, test, recovery ...

And how do you measure these things while eliminating (or minimizing) other factors (noise)?

> Maybe you meant something more specific by maintainability
> - maybe you meant Mean Time To Repair

I mean ease of enhancement.

> You seem to be ignoring that the article reports which
> results were statistically significant.

Where? I searched the article for the term. Where do they show that 29 (heterogeneous) projects are statistically significant for all the factors they study?

> Yes the experiment provides evidence of correlation not
> causation - that's a big advance on not providing evidence
> of correlation or causation.

I'm not arguing that it's not useful. It's just not clear that someone else couldn't do a similar study and get completely different results. The point is that it's foolish to take things like this without considering other unquantifiable factors. For one, just because, on average, a certain factor improves success in one of these studies, it doesn't mean it will for any given project.

If the managers where I work were to respond to this study, they likely would report the 'official' success rate: close to 100%. But in reality our systems are a mess and fail on an hourly basis. Many systems have never processed a transaction. I once had to rewrite major portions of a 'successful' project because it didn't work. It never worked and the users had reverted to a manual process.

> -snip-
> > I would say that it's even more dangerous that it's easy
> > to fool oneself into believing that something is
> > scientific when it is not, as the above articles are not
> > scientific. These types of studies fall under what
> > scientists commonly call pseudo-science.
>
> Can you justify those claims about that specific article
> in some way?

Where to start? There isn't much (if anything) scientific about it. It's not controlled. It's not repeatable. There's no hypothesis. Can you explain why you would think this is scientific?

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: experiments & dogma Posted: Jul 12, 2006 11:09 AM
James Watson wrote
> > Maybe you meant something more specific by
> > maintainability - maybe you meant Mean Time To Repair
>
> I mean ease of enhancement.

Which in some models of software quality would be a sub-characteristic of maintainability
http://www.serc.nl/quint-book/changeability.htm


> > You seem to be ignoring that the article reports which
> > results were statistically significant.
>
> Where? I searched the article for the term. Where do they
> show that 29 (heterogeneous) projects are statistically
> significant for all the factors they study?

Search for the term "statistical significance" or "significant" or just re-read the article :-)


> I'm not arguing that it's not useful. It's just not clear
> that someone else couldn't do a similar study and get
> completely different results.

It's clear that someone else could do a similar study and get completely different results - and we call that science in action.

(It's also clear that you haven't provided an example of someone else doing a similar study and getting different results.)


> The point is that it's foolish to take things like this
> without considering other unquantifiable factors. For
> one, just because, on average, a certain factor improves
> success in one of these studies, it doesn't mean it will
> for any given project.

afaict that reasoning would also apply to any "personal theories" you might hold.


> If the managers where I work were to respond to this
> study, they likely would report the 'official' success
> rate: close to 100%.

That's your speculation.

We can see that in the study, some projects reported a zero defect rate, and that the median was 7.1 reported defects per month per million lines of new code over first 12 months.
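
(For scale, at that median rate a project delivering 200,000 lines of new code would see roughly 7.1 x 0.2, or about 1.4, reported defects per month over its first year.)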


-snip-
> Where to start? There isn't much (if anything) scientific
> about it. It's not controlled. It's not repeatable.
> There's no hypothesis. Can you explain why you would
> think this is scientific?

If it's not repeatable why did you write "It's just not clear that someone else couldn't do a similar study and get completely different results"?

Like most words, science has several accepted meanings; you seem to have chosen a particularly narrow definition.

James Watson

Posts: 2024
Nickname: watson
Registered: Sep, 2005

Re: experiments & dogma Posted: Jul 12, 2006 12:20 PM
> James Watson wrote
> > > Maybe you meant something more specific by
> > > maintainability - maybe you meant Mean Time To Repair
> >
> > I mean ease of enhancement.
>
> Which in some models of software quality would be a
> sub-characteristic of maintainability
> http://www.serc.nl/quint-book/changeability.htm

This is the kind of bullshit I'm talking about. The measure is based on completely inconsistent and unreliable metrics.

How do you measure 'effort'?

> > > You seem to be ignoring that the article reports which
> > > results were statistically significant.
> >
> > Where? I searched the article for the term. Where do they
> > show that 29 (heterogeneous) projects are statistically
> > significant for all the factors they study?
>
> Search for the term "statistical significance" or
> "significant" or just re-read the article :-)

I still don't see where they explain how 29 projects are a significant sample of all software projects. Their explanation of why having all the projects come from a single company is OK is pretty weak.

Moreover, they use LOC as a measure of productivity, which is laughable at best. Defect counts are as much a measure of the testing and tracking process as of the software itself. Like you have said, lack of evidence doesn't prove something doesn't exist.

> > I'm not arguing that it's not useful. It's just not clear
> > that someone else couldn't do a similar study and get
> > completely different results.
>
> It's clear that someone else could do a similar
> study and get completely different results - and we
> call that science in action.

If they get different results then neither experiment proves anything. That's science in action.

> (It's also clear that you haven't provided an example of
> someone else doing a similar study and getting different
> results.)

What's your point?

> > The point is that it's foolish to take things like this
> > without considering other unquantifiable factors. For
> > one, just because, on average, a certain factor improves
> > success in one of these studies, it doesn't mean it will
> > for any given project.
>
> afaict that reasoning would also apply to any "personal
> theories" you might hold.

What's your point?

> > If the managers where I work were to respond to this
> > study, they likely would report the 'official' success
> > rate: close to 100%.
>
> That's your speculation.
>
> We can see that in the study, some projects
> reported a zero defect rate, and that the median was 7.1
> reported defects per month per million lines of new code
> over first 12 months.

And this proves only that the projects reported those numbers. Maybe this study is about the behavior of managers? If a manager reports A, there is a high correlation that they will report B.

This goes right to the heart of my point. Because it's hard to quantify most things about software quality, people are grasping at straws to come up with metrics, and then they base their decisions on factors that range from questionable to outright meaningless. I'm telling you that I work in a company where they do this and it's a disaster. We have all kinds of Six Sigma yellow-belts and black-belts and almost nothing they do contributes to the actual success of the projects. Their interference causes more problems than anything.

> -snip-
> > Where to start? There isn't much (if anything) scientific
> > about it. It's not controlled. It's not repeatable.
> > There's no hypothesis. Can you explain why you would
> > think this is scientific?
>
> If it's not repeatable why did you write "It's just not
> clear that someone else couldn't do a similar study and
> get completely different results"?

Because the similar experiment would be similarly unrepeatable. Similar does not mean repeatable.

'Repeatable' has a specific meaning in science. I'm not really interested in explaining the scientific method to you and there are much better resources than me for learning about it.

> Like most words, science has several accepted meanings; you
> seem to have chosen a particularly narrow definition.

What is the definition you are using in this context?
