Should individual human beings, and humanity's institutions, adopt the principles of contract programming, and use contract enforcement in their own functioning? Would that lead to a better world?
If for no other reason than I've been digesting too much right-wing media and too much left-wing literature of late, and at the same time working on a treatise on The Principle of Irrecoverability as part of a new, gargantuan, instalment of my and Bjorn Karlsson's Smart Pointers column entitled "The Nuclear Reactor and the Deep Space Probe", I had cause to wonder this morning whether humanity's accelerating decline could have been avoided, and might yet be arrested, if we used contract programming principles and practices.
Now I'm clearly not talking about people experiencing unrecoverable exceptions leading to process termination if they make a faux pas at a dinner party:
"Who's that hideous creature over there?"
"That's my husband/wife/mother/etc.!"
(Incidentally, though that's an urban myth, I have a friend who actually did this, to much consternation and posthumous merriment.)
But it seems we humans have parallels in our individual and collective behaviour to the neophyte programmer. When writing code, new programmers write what they think will work, expecting it to work. Contrast this with the seasoned programmer who writes out the interface, and then codes the implementation expecting it to fail, and who therefore codes in constraints and verifications (discussed at length in Chapter 1 of Imperfect C++ :-) ) to trap the failures.
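The contrast can be sketched in a few lines of C++ (a minimal illustration of my own, not code from Imperfect C++): the neophyte version trusts its caller; the contract version states its precondition at the interface and traps the failure at the boundary, rather than letting it propagate as carnage further downstream.

```cpp
#include <cassert>
#include <stdexcept>

// Neophyte: assumes the divisor is never zero, and behaves
// unpredictably (undefined behaviour, in fact) when it is.
int naive_ratio(int num, int den)
{
    return num / den;
}

// Seasoned: the precondition "den != 0" is part of the interface,
// and its violation is reported at the point of failure.
int checked_ratio(int num, int den)
{
    if (0 == den)
    {
        throw std::invalid_argument("checked_ratio(): den may not be 0");
    }
    return num / den;
}
```

Same arithmetic in both; the difference is purely that the second one expects to be misused, and says so.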
Like the neophyte, human beings and, especially, human organisations are almost entirely reactive when it comes to error. Consider the corrupt public-sector worker who takes big bribes for being helpful to big business, the insider-trader, the quietly genocidal doctor, the truck driver pressed into amphetamine use to meet otherwise impossible deadlines who ends up causing carnage on the roads, the airport baggage-handling systems that facilitate the unwitting use of innocent passengers as drug mules, the software engineer who checks things in without a care (or a unit-test, or even a successful compilation), the paedophiles allowed positions of power and influence over children, etc. etc. etc.
Each of these systems should have failure built-in and expected. But they don't. They're reactive.
Only when the corruption of politicians reaches sufficiently juicy proportions are the individuals exposed; never is the system questioned. (btw, one has to respect the power of human depravity to overcome all flavours of political set-ups: democracy, even the real kind (when it existed); communism, even the real kind (when it existed); autocracy; monarchy; etc. etc.)
Only when enough people have been crushed to a pulp inside their little worlds of tin does the public, under the guidance of early-evening journtertainers, clamour for a fix to drug use among truck drivers: drivers pressured by employers who are themselves squeezed beyond rationality by the increasingly monolithic and powerful supermarket super-organisations, organisations shaving the tiniest of margins to present to their single-minded shareholders and to justify paying their top two or three executives hundreds of times the salary of their pay-squeezed employees.
Only when the relative of a victim stands resolute against the coroner's acceptance of a rogue doctor's word, resulting in the revelation of hundreds of other premature 'assisted deaths', do health authorities enact checks and balances.
Only when a government, to save a few million dollars, releases vulnerable victims of mental illness into the community, where they proceed to forget to take their medicine, relapse, and start murdering members of the public, do people question such cost-saving policies. (How many lawsuits does a few million dollars cover?)
Only when political systems - whether left, or right, or way out in the aether - lead to favour, corruption, poverty, murder, do questions get raised. And even then, all we get is a new flavour. (SSADM didn't make all problems go away. Ah well, that's because you didn't have OO. OO'll fix it. OO not the panacea? Well, component-based programming will sort it out. C++ not the best thing for everything? Worry not, Java is. Java's not the cat's miaow? Well, try .NET. Blah blah.)
And don't even get me started on Global Warming and Global Dimming. Double whammy!!
So here's a radical idea. Let's accept that underlying all differences in human institutions - political, religious, economic, social, commercial, etc. - is the fundamental attribute of human nature: Optimistic Nonchalance. To be sure, this is both our strength, as it allows us to march forward in the face of adversity, complexity, futility and, er, death, and also our weakness, as it means we have, as we say in Australia, the "She'll be right" attitude that assumes everything's going to be ok, and if it's not, someone else will come along and fix it for us.
Let's accept that that's how we've evolved/been designed, and do what the seasoned programmer does: expect failure. Indeed, we have a latent capacity for anticipating negative consequences; as any parent will tell you, every possible permutation of damage to one's children flits through the mind in every circumstance. Let's harness that, celebrate it, institutionalise it. This is not a negative/pessimistic thing - remember, the seasoned programmer expects to create successful components, but has the wisdom and experience to realise that this success is hard won. Code that contains enforcement of contracted behaviour is far more robust than that coded on a wing and a prayer. Let's start putting the asserts into real life. Discuss....
Of course, I might just be completely nuts. It is very early in the morning. :-)
Maybe it is due to historical conscience that all rationalistic attempts to ameliorate the human being, much in vogue among the intellectual avant-gardes of the first half of the 20th century, grossly failed. The most visible example is/was the communist field experiment, finalized with a cleanup that threw all economic resources back to mafia-like clans and onto the capitalist market, and the political resources into para-democratic regimes, as in contemporary Russia.
Programming mankind is hard, especially when it comes to decisions.
I'm afraid that society is dissipative and does not run in a zero-energy, artificial and reversible universe with controllable debug sessions, bounds checking and stack-trace inspections. It is a notoriously "ill-defined problem", if you like the language of the digerati. Perhaps it is more interesting to reason why human societies do not tend to create a Utopian state of calmness, friendliness and soft anarchy, but instead tend to expand aggressively and try to stabilize themselves at the edge of chaos, where they try to make progress and never look back. There is some freedom and risk in their behaviour that has nothing to do with the individual freedom of the liberal apostles, which has more in common with "slave morals" in the Nietzschean sense.
The real-world equivalent of an assert() or a unit-test case is someone standing over your shoulder looking at every action you take. Due to the overheads, assuming incompetence is less efficient, and it is also degrading.
The best thing is probably to be wary until you can be sure that the person is competent, and after that point you don't have to worry as much.
You build trust in people; if they betray that, then you can be wary again (or forever). Humans already do this. She'll be right.
When using asserts we basically say that we do not fully trust the user, world input, etc., but we implicitly put our full trust in the compiler/program to faithfully execute the assert whenever one of our conditions fails.
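That trust boundary can be made concrete in a short sketch (the function and names are illustrative, not from any real API): external input gets a recoverable, always-on check, while an internal invariant, something we trust our own code to have established, is guarded with assert(), whose faithful execution we in turn entrust to the compiler and runtime.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Untrusted, from the outside world: validate, don't assert.
int parse_score(std::string const& input)
{
    int const value = std::stoi(input); // throws on non-numeric input
    if (value < 0 || value > 100)
    {
        throw std::out_of_range("score must be in [0, 100]");
    }
    return value;
}

// Trusted, from our own code: the caller has already validated, so a
// violation here is a bug in *us*, and assert() is the right tool.
int grade_band(int score)
{
    assert(score >= 0 && score <= 100); // a contract with ourselves
    return score / 10;
}
```

The assert vanishes in a release build precisely because it encodes trust in ourselves; the input validation never vanishes because the world has earned no such trust.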
In human-based systems, the overseer is a human too, and as has been mentioned, even if you have someone looking over your shoulder (what will that do to productivity?), that's a human too.
The main problem in redesigning public human systems (for lack of a better term) is how to make the controlling/monitoring system more objective and robust to the corruption of power. If you think about it, it is kinda like a circular dependency: if you give monitoring power to one agency over another, you may need yet another to monitor the first, etc.
However, planning for the worst was formally defined in the Minimax theorem (http://en.wikipedia.org/wiki/Minimax) about 100 years ago, and I agree that many or most human public systems could use an injection of such rule changes.
Game theorists could probably be put to very good use in redesigning public systems.
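A toy version of the minimax (strictly, maximin) rule mentioned above, with invented payoffs: for each available choice we look at the worst payoff the world can inflict, then pick the choice whose worst case is least bad; planning for failure rather than hoping for the best.

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Each inner vector holds the payoffs the world (or an opponent) can
// force if we make that choice; we pick the choice with the best
// worst case.
int maximin(std::vector<std::vector<int>> const& choices)
{
    int best_worst_case = INT_MIN;
    for (std::vector<int> const& outcomes : choices)
    {
        // The world forces the minimum outcome of our choice ...
        int const worst = *std::min_element(outcomes.begin(), outcomes.end());
        // ... and we maximise over those minima.
        best_worst_case = std::max(best_worst_case, worst);
    }
    return best_worst_case;
}
```

For choices {{10, -100}, {2, 2}}, the optimistic nonchalant picks the first for its shiny 10; the maximin player picks the second, because its worst case (2) comfortably beats the first's (-100).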