I'm sitting in a panel discussion here at OOPSLA - how to deal with ultra-large systems. A funny thing happened as soon as we got to questions. The panel topic is based on a study that posited an "unbelievably" large system - something with, say, a billion lines of code. Ralph Johnson popped up and mentioned that he's aware of at least one such system (all in one language) out there :)
So that forced a redefinition, to "bigger than anything other than perhaps the entire internet" - i.e., something for which there are no real specs, no one is in charge, etc. Getting more specific, it's being asserted that we can't really handle building new systems as big as, say, 50 million lines of code - such a system is unlikely to succeed, and no one is likely to really understand it.
Here's an interesting assertion - large systems might be compared to large urban infrastructure: trash collection, sewage, water, etc. - there's a basic, stable infrastructure on top of which emergent systems end up getting built.
It was probably inevitable that the financial bubble/system was brought up as an example of an ultra-large-scale system that was not well governed. The real question is: if no one understands a system, how do you effectively regulate it? That's not really an assertion about the financial system at present, more of a general question. A good point came up while I was pondering that - the lack of transparency was probably the biggest issue there, and it's the biggest problem in any such system.
There's an interesting question: do we need to see some really big system fail in a big way before we can draw proper lessons from it? Some of the panelists seem to think "yes", but I'm way less sure. I think a lot of the responses to the financial meltdown were very ad hoc and panicky, so I'm not sure that an equivalent software failure would be dealt with any more rationally. I liked the fact that Dave Thomas (Smalltalk Dave) was skeptical, too.