Apparently Einstein went to his grave believing that "God does not play dice with the universe," that the Heisenberg uncertainty principle was wrong, and that there was just some other variable that we weren't taking into account that would explain everything.
This was called the "hidden variable theory." Numerous subsequent experiments have shown that there is no hidden variable and every event really does have a probabilistic component.
In the case of software development, there really are extra variables, but they are not hidden. We just ignore them. These variables, called "people," are unpredictable only if you are searching for a methodology that excludes them, which many people are. What is the intent of the methodologist? I think it is to create a formula that works regardless of the individuals involved. The great wish of managers is that programmers can be treated as interchangeable parts, because otherwise the company seems to be completely at the mercy of chance as to whether a project will be a success or not.
I think the problem with this, and with many issues in computing, is that of deterministic thinking. Which seems like a logical conclusion to draw; after all, we are in the realm of binary "yes" and "no." But there seems to be some kind of uncertainty principle at work -- as systems get larger and more complex, we move out of the realm of "yes" and "no" into the world of "maybe."
But the promise always seems to be there, just beyond our reach: there should be a magic formula that will allow us to deterministically control the outcome of a project. And as long as we are pursuing a deterministic solution, we are unable to consider the possibility that there is none, or at least that this is not the most productive path. And we certainly cannot admit that the road to success may be primarily in the realm of human interactions, and that successful projects will stack the odds in their favor by hiring the "best" team possible, where the meaning of "best" varies with far too many variables to control, and so can be quite different for different situations.
The hardest thing to admit, I think, is that software development is the complete opposite of an assembly line; it is far more an artistic endeavor, like writing a novel or performing a play. Or even painting. It's as if we completely skip over the important details of the activity, saying, "Painting means applying paint to a surface. So I can achieve the same effect with a spray gun on a barn as Monet did when he applied paint to a surface using his paint applicators." And after all, we are just manipulating bits, so it seems a logical conclusion to draw, except that it doesn't seem to work very well.
And as long as we cling to these thoughts of determinism, we are blinded to potentially better approaches. It remains very difficult to let go and say, "What if it's not possible to control everything? How can we push things in our favor anyway, and work within those constraints?"
I think this is a twofold issue: one is the programmer as a resource, and the second is the computer as the programmer's resource. While the first is well established in most fields, managing two levels of resources is not well established in the programming field.
Here's what I mean. In construction, if you hire contractors, any large project requires you to hire an architect to create a plan for the changes you want to make. That way, you can easily replace workers if they start to do the job incorrectly, and you have an architect who can check the work.
None of this exists for programming. In programming, the programmer also has to organise his or her workers... the machine. Most programming languages involve telling the machine what to do, which makes the result platform dependent. It doesn't matter what language you use; the point is that they all do things differently. There is no plan (there will be arguments on this, but that is just denial of the current state of computing). The programmer usually has some really high-level ideological "plan", but when it comes to implementation, it's everyone for themselves. Sure, the interfaces may be defined, but not the implementations. So you can't replace the programmer, because someone else will need a ramp-up period to come up to speed on what this programmer was doing. But if this programmer had a "plan", there'd be no issue, just as in the case of construction contractors.
The problem isn't whether or not programmers are replaceable. It's whether you can design software without telling the machine what to do, just as an architect doesn't tell the workers what to do on the blueprint. The machine would figure out how to convert the plan into software, just as a head construction contractor can figure out how to get a plan done without the direct help of the architect. It would then no longer matter what machine you're on, either: the computer would use whatever resources are available to it in order to fulfill the plan. Unfortunately, this is 10 or 20 years in the future. We don't have anything like that right now. And frankly, I doubt many programmers would like the thought of not being critical to the job anymore.
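The "plan, not instructions" idea described here already exists in miniature in declarative programming styles. A minimal sketch (Python; the function names and data are mine, purely illustrative):

```python
# Imperative style: spell out *how* the machine should do the work,
# step by step.
def big_orders_imperative(orders):
    result = []
    for order in orders:
        if order["amount"] > 100:
            result.append(order["id"])
    return result

# Declarative style: state *what* is wanted and let the runtime decide
# how to produce it. (SQL query planners take this idea much further.)
def big_orders_declarative(orders):
    return [order["id"] for order in orders if order["amount"] > 100]

orders = [{"id": 1, "amount": 50}, {"id": 2, "amount": 150}]
print(big_orders_imperative(orders))   # [2]
print(big_orders_declarative(orders))  # [2]
```

The gap between this toy contrast and a machine that turns an architect-style blueprint into a whole system is, of course, exactly the 10-to-20-year gap the post describes.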
I still see a lot of denial in looking at other fields for tips on how to better produce software. The extra variables you speak of are in the programmers when they should be in the plan. That's why replacing programmers becomes difficult the more work they produce.
I think this argument is reminiscent of "Peopleware" by DeMarco and Lister and "Rapid Development" by McConnell -- both are classics these days because they brought the emphasis back to people, away from statistics and "one size fits all" software development methodologies.
Yes, people and teamwork are part of the equation and the primary source of non-determinism. Some people think it's a problem; some people don't. I prefer the latter. Programming is a creative process, much more so than putting a building together, IMHO. This makes for some spectacular successes and some spectacular failures. It is also what makes it enjoyable in the first place. Every project is different; there's so much that can go right and so much that can go wrong. Even if you put people with a "proven" track record together, there's still no guarantee they will work together effectively.
I can imagine this is not good news for project managers who want a deterministic result every time without having to think along the way. I am also sure there are plenty of people out there who have developed a second nature for this kind of thing. They know when trouble is coming, and they know to look for more than the "number of lines of code per developer". They also know that things can still go wrong, and that you just have to keep an open mind about it and learn from it.
It is equally dangerous to believe that human interaction is in the realm of magic and not something that can be better understood and supported.
The idea of "hire the best people and get out of their way" typically emphasises the wrong part: the talent myth. I say the more important part is "get out of their way" (and doing that in a particular way), and this is demonstrated by what happened at NUMMI.
A quite natural approach to approximating the "hidden variables" of software construction is measuring and evaluating human capabilities. Human capabilities are separated into their diverse aspects and measured objectively. Finally, a project manager decides which people are best qualified in a concrete project situation and performs a social synthesis. Hence the solution to the "software crisis" might be social engineering? Fire your process managers and hire a team of psychologists and mental sports trainers.
The romantic discourse on "people" and their fuzzy, nonlinear, mysterious, artistic qualities, the tales of famous great hackers, collective poetry, and other folklore might be superseded by a rational (re-)construction of individual qualities and their team synthesis. Companies with good personality diagnostics always succeed. "People" are first of all addressable resources and material within an organization. This need not contradict their narcissistic self-images as unique personalities, or their fear of being easily replaceable. The new project management is well advised not to trouble its employees with "Neutron Jack" attitudes. Good people will continue to prefer not to work for assholes. Hopefully, metrics will show this as well.
> This was called the "hidden variable theory." Numerous subsequent experiments have shown that there is no hidden variable and every event really does have a probabilistic component.
IANAP, but I am not persuaded reality is 'random'. The fact that probabilities fall within a specific range (for example, when projecting a photon through a hole) means that something the experiment is doing affects the outcome. We may not have hidden variables, but the universe may be interconnected in such a manner that 'exciting' one node also excites the neighbouring nodes (of course that's intuition, not a proof by any means).
And what about quantum entanglement? If there are no hidden variables, then how does quantum entanglement work?
> I think the problem with this, and with many issues in computing, is that of deterministic thinking. Which seems like a logical conclusion to draw; after all, we *are* in the realm of binary "yes" and "no." But there seems to be some kind of uncertainty principle at work -- as systems get larger and more complex, we move out of the realm of "yes" and "no" into the world of "maybe."
The reason we go from 'yes/no' to 'maybe' is that the logic of systems is not formally verified. People can only hold so much of a logic diagram in their heads... if the logic diagram becomes too big, then we need machines to check the logic.
But checking the logic of large Turing-machine programs is not feasible yet due to machine constraints.
And then you have the halting problem: you cannot prove, in general, whether or when an algorithm terminates, and therefore certain parts of reality can only be proven by being executed.
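The halting-problem point can be sketched concretely. Assume, purely hypothetically, a decider `halts(f)` that reports whether calling `f()` terminates; the classic diagonal construction then defeats any candidate decider you supply (all names here are mine, for illustration):

```python
# Suppose halts(f) could correctly decide whether f() terminates.
# This construction builds a function that does the opposite of
# whatever the decider predicts about it.
def make_paradox(halts):
    def paradox():
        if halts(paradox):   # if the decider says paradox halts...
            while True:      # ...then loop forever instead,
                pass
        # ...otherwise return immediately.
    return paradox

# Feed it any claimed decider, e.g. one that always answers
# "does not halt":
claims_never_halts = lambda f: False
p = make_paradox(claims_never_halts)
p()  # returns immediately -- so p *does* halt, refuting the decider
```

A decider that instead answered "halts" for `p` would send `p` into the infinite loop, so it is wrong either way; no total `halts` can exist.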
> IANAP, but I am not persuaded reality is 'random'. The fact that probabilities fall within a specific range (for example, when projecting a photon through a hole) means that something the experiment is doing affects the outcome. We may not have hidden variables, but the universe may be interconnected in such a manner that 'exciting' one node also excites the neighbouring nodes (of course that's intuition, not a proof by any means).
And that's where you are going wrong. It is basically impossible to have lived your life in our world and have an intuitive understanding of quantum mechanics. It defies intuition.
> And what about quantum entanglement? if there are no > hidden variables, then how does quantum entanglement > work?
Unless you have studied and worked in theoretical physics for more than a decade, you are out of your depth. But for fun, here's an idea: extra spatial dimensions that we cannot detect.
It's also an interesting example because Einstein observed that quantum mechanics predicted this phenomenon and then used it to try to 'disprove' quantum mechanics because it seems to violate his own theories of special and general relativity.
> > And that's where you are going wrong. It is basically impossible to have lived your life in our world and have an intuitive understanding of quantum mechanics. It defies intuition.

Not intuitive understanding... a hunch, if you like.
The problem is that there is an immense amount of reproducible empirical data showing that there are no hidden variables. The computer you are using works based on quantum mechanics. Quantum mechanics can predict results with extreme accuracy -- more so than classical mechanics or relativistic effects, from what I am led to believe. I find that more convincing than a layman's hunch.
> > Unless you have studied and worked in theoretical physics for more than a decade, you are out of your depth. But for fun, here's an idea: extra spatial dimensions that we cannot detect.

Aren't those 'hidden variables'?
I guess you could call it that, but that's not really what Einstein meant. He meant that there are extra properties which, if known, would eliminate the uncertainty and make results purely deterministic. The existence of extra spatial dimensions might explain how information can travel faster than the speed of light, but it doesn't remove any of the randomness from quantum mechanics.
> The main thesis is that laws that we consider fundamental may just be collective effects of action at smaller scales. Hidden variables may just be cases where reductionism fails.
Well, let me clarify what I have already stated. It's possible that hidden variables exist, but they are not detectable and are not required to produce accurate predictions, and therefore they are not part of science (at this point in time). In other words, they could exist, but they are not part of our experience, i.e. our 'reality'.
1.) About philosophy: I agree with Einstein in that “God does not play dice”, but maybe there are “hidden variables” that are impossible (or very difficult) for us as humans to measure. In that case a probabilistic approach sounds pretty good. It might be the closest we can get to “ultimate reality”.
2.) About methodology: Methodologies try to reduce risk by “managing” changes in technology or requirements. Thinking that methodology will make life easier by eliminating change is wishful thinking. Software built that way is often either obsolete and redundant before it is finished or it just generates useless work with each change. The only thing that the extra work can potentially buy is a better understanding of the system from “outsiders” by viewing the system from different angles. But most artifacts (which will get lost or are out of date in about one month from being built) are for third parties (e.g. managers) that end up not reading them anyway. The exception to that is when methodology is an effective communication tool between people. Sometimes the method chosen happens to be precisely how the people in the project like to communicate with each other, most of the time it isn’t.
3.) About people: our professor of software engineering would often jokingly say: “Don’t expect any big advances in software engineering as long as us, the computer scientists are doing the research. We will see some breakthrough once we get more anthropologists and sociologists interested in the field.”