3 Laws of Software Complexity, analogous to the 3 Laws of Thermodynamics.
Matt Quail has blogged about a first law of software complexity. It's an analogy to the First Law of Thermodynamics, also known as the Law of Energy Conservation. Matt's analogy is that without radical changes in abstractions, a complex system will remain complex. That is, complexity is conserved.
I replied in his blog about the Second Law of Thermodynamics, which says that closed systems tend toward maximum entropy. Entropy means disorder, and if you accept the inference that disorder implies complexity, then an evolving software system without significant external contributions tends toward maximum entropy. In other words, to avoid entropy you have to keep adding energy, which in information terms means organized knowledge. So you have to keep refactoring, turning disorganization into organization.
Keep in mind that for both laws, when we talk about a closed system, we don't mean static and unchanging. There's a Zeroth Law of Thermodynamics, namely thermal equilibrium. Well, I've got a Zeroth Law for complexity too: software systems tend toward equilibrium with their environment. In other words, in a real-world environment where change is unavoidable, a system is under constant pressure to change.
So in summary, the Laws of Software Complexity are:
Laws of Software Complexity
Zeroth Law. Change Equilibrium - Change is Unavoidable
First Law. Complexity will be Conserved - Incremental changes do not change inherent complexity
Second Law. Software Complexity tends to Maximum Entropy - Aggressive refactoring can only slow that tendency
So in summary, the Laws of Software Complexity are: <snip/> Pretty cool, don't you think?
Um, well, comparing software to a physical science such as physics is not new. The notion of conservation of software complexity is well known.
Still, it's nice to see people discuss this, but I think Matt misses the larger issue. Knowing that complexity is conserved, the question is Where to put it? Who pays for it? In Windows (theoretically), the complexity for end-users is reduced at the expense of those writing the OS (more or less). In most Linux distros, more complexity is pushed back at the user, leaving a more open and (perhaps) simpler system.
I've seen many systems that are hard to configure because the developer wanted it to be easier to write the code. Or systems that don't play well with others because the complexity was pushed out towards the API.
It's like a water bed; if you push down in one place, it pops up in another. App design requires a deliberate decision about who must live with what degree of complexity, and how systems are partitioned to keep complexity from spilling over to other areas. OOP, message-oriented programming, and service-oriented architecture are some of the ways people try to address this.
Most important, though, is seeing that these sorts of analogies are themselves good examples of leaky abstractions. Over-indulgence leads to the same sorts of problems we see, for example, from people who think of the Internet as a highway, or as a printing press, or as TV.
Overplayed comparisons can obscure seeing things for what they really are.
Um, you've got to lighten up a little bit! Abstraction not only helps people think, but helps people communicate concepts. Sure, there are some leaky abstractions here and there, but sometimes you can't spend all your time looking at trees; there's a forest out there too.
I know about forests and trees, thanks, and I appreciate a good analogy. But they're tools. What's critical is the insight they provoke, and I just didn't find that much in the web log you referenced or the subsequent discussion. No big deal. Maybe there's more to it and it just wasn't expressed very well.
I must confess, though, that a sure way to turn me off from something is to describe it as "cool," which is perhaps the linguistic equivalent of MSG.
(We may need some Laws of Blogging, perhaps something that forbids the use of "cool", "sucks", and any reference to Dave Winer, at least until What Isn't Called Echo Anymore is stable. )
(Do I need to add a dopey smiley face ideogram now?)
PS: In my previous post I wrote 4+ moderate paragraphs, and your response is to tell me to lighten up, and then explain what abstractions are and why they're a Good Thing? Disappointing. I'm not trying to be snarky here, but if you disagree with something I said then there have to be better ways to tell me where you think I'm wrong.
Or maybe I'm mistaken about the purpose of the comments section.
Out of curiosity, is there a standard way of measuring complexity, I wonder?
Seems to me that complexity is inversely proportional to the amount of hiding done by a system. This fits with the fact that the central ideas of OO revolve around one goal: minimize ripple effects by keeping details secret. The more things are hidden, the more intelligible a system becomes. Bizarre, is it not? It's a paradox.
One could define metrics which measure how much hiding is going on in a program.
- SCOPE: number of private methods (hidden to users of the class) versus the number of non-private methods
- SCOPE: number of package-private classes (hidden to users of the package) versus the number of public classes
- POLYMORPHISM: number of generic interface references versus number of concrete class references
- INDIRECTION: number of repeated uses of the same literal ("magic numbers") versus symbolic constants
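As a minimal sketch of the first SCOPE metric: Python has no enforced access control, so the leading-underscore convention stands in for Java's private modifier here, and the function name hiding_ratio is my own invention, not anything from the thread.

```python
import inspect

def hiding_ratio(cls):
    """Fraction of a class's ordinary methods that are 'hidden'
    (private by the leading-underscore convention).  Dunder methods
    are excluded since they are part of the language protocol."""
    methods = [name for name, member in inspect.getmembers(cls)
               if inspect.isfunction(member) and not name.startswith("__")]
    hidden = [name for name in methods if name.startswith("_")]
    return len(hidden) / len(methods) if methods else 0.0

class Stack:
    def push(self, x): ...
    def pop(self): ...
    def _grow(self): ...      # internal resizing detail
    def _shrink(self): ...    # internal resizing detail

print(hiding_ratio(Stack))  # 2 of 4 methods are hidden -> 0.5
```

On this crude measure, a higher ratio means more of the class's machinery is kept secret from callers; whether that actually tracks "intelligibility" is exactly the open question the thread is circling.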
Tesler's Law of Conservation of Complexity: You cannot reduce the complexity of a given task beyond a certain point. Once you've reached that point, you can only shift the burden around. (Larry Tesler)
I'm currently faced with having to reduce the entropy in some very badly written software. My theory is that entropy is being conserved by increasing the amount in my brain.
> Out of curiosity, is there a standard way of measuring complexity, I wonder?
There is a vast swathe of material out there on this subject. One of the most commonly used approaches used to be Function Point Analysis. We used it for a while at my company but, to be honest, found the subject so subjective (so to speak) that it created lots of hot air but little useful information.
The best analogy for complexity measurement I heard was that it was like estimating the cost of building a house by counting the number of electrical power points in the building plan. (So an average 1 bedroom flat might have 9 sockets and an average 3 bedroom house might have 18 sockets.)
What does this tell you? Although it's possible to translate the complexity of a system into a (more or less) simple measure of some or other of its properties, you end up with a result that is vague and/or specialised to the point of meaninglessness. So you need to make your complexity measurement more complex...