Computing Thoughts
Wrong Correctness
by Bruce Eckel
December 31, 2009
Summary
Malcolm Gladwell's latest book is a selection from his New Yorker columns. The underlying theme is what I call "wrong correctness," which is fascinating because there are enormous possibilities to be mined, but only if we can learn how to create a new approach to business.

We love our models. Create a model and you can cheaply predict outcomes without actually doing the experiments. They are magical windows into the future.

We often think of models in terms of mathematical equations, but any kind of representation is a model. "The map is not the territory" is one of the more succinct descriptions of the disconnect between the model and the thing that it represents.

Every model is an abstraction, which means that some information is removed or lost in creating the representation. We attempt to abstract away only the pieces that don't affect the results we seek, and in doing so we assume that the system is made up of discrete, isolated components that can be added and subtracted with little or no impact on the rest of the system. Indeed, the concept of reductionism is itself a model, one that says, in effect, that the pieces are more important than the interactions that form the whole (although I suspect it started as "let's see how far we can take this idea" rather than an expectation that this is how the world works; only subsequently did people adopt reductionism as an accurate worldview).

Ironically, a model becomes a problem when it starts to work enough of the time that you begin to believe in it. You start to see the model as the world, and it becomes annoying and time-consuming to constantly remind yourself that "all models are wrong, some are useful." In fact, humans are too limited not to see the world through our abstractions. If, every time we had to make a decision, we went back to first principles and took everything into account, we'd never do anything.

Newton's laws of motion provide an excellent example. Within our realm of perception they are absolutely "true" and accurate, all the time. And yet they are only an approximation. When you start looking at the very big, very small, very fast, etc., they don't apply anymore. But it makes no sense to take the more complex factors into account when we live well within the limits of the approximation, so we abstract away the extra bits because they have virtually no impact in our realm. Unfortunately, we tend to forget the approximation and assume that our model is the real thing. Therein lies the danger: not the approximation itself, but ignorance of it.
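To put a rough number on "virtually no impact" (my figures, chosen purely for illustration): special relativity corrects Newtonian time and length by the Lorentz factor

\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

At highway speed, v is about 30 m/s and c is about 3x10^8 m/s, so v^2/c^2 is roughly 10^-14 and gamma exceeds 1 by only about 5x10^-15, far below anything we could notice. At half the speed of light, gamma is about 1.15, and the Newtonian model visibly fails.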

Abstraction Pitfalls

Models are so enticing. When you get one that works, it gives you a tremendous advantage: cheap predictability of results. It is very tempting to create a model even when it's not possible, because you want one, and you want the outcome it promises.

My father spent many years in the "charting" movement, trying to understand what a stock was going to do by looking at the curve of what it had done before. There's an entire mythos around the idea that there are patterns in stock charts that will tell you what they are going to do. Numerous mathematical studies have shown conclusively that this doesn't work. The experience of the chartists themselves has shown it; the only people who make any money from this movement publish newsletters. But the idea that such a model would work is so compelling -- it would make you rich -- that people build a religion despite consistent disproof.

An even more insidious problem happens when a model is possible -- perhaps one that produces very limited results on its own -- but requires very complex information in order to produce more valuable results. Gathering that information becomes too expensive, or you just can't figure out how to do it, so you decide to simply ignore that term. The classic example of this is the cost-benefit analysis. You want numbers to produce a conclusion, so if something doesn't produce numbers, or its numbers are too difficult to collect and quantify, the easiest thing to do is just not include that factor. Customer satisfaction, arguably the most important value that any company needs to consider, is often left out of a cost-benefit analysis because it's too hard to include. Ditto employee satisfaction and employee effectiveness.

Predictability

One big flaw with models is that they assume the future is predictable. In particular, they posit that the way things are going now is pretty much how they will go in the future. This is one of the more comforting whispers you can speak to the important portions of your brain. Those portions stand up and say "Yay! Sounds good to us!"

The other parts, the ones that say "something bad might happen," are shushed. After all, most of the time, things are comfortably predictable. And you never know when something bad is going to come flying out of the blue, so what's the point of dwelling on it? On the personal level, at least, a cheerful outlook sounds like pretty good advice.

To top it off, figuring out what to do about the random catastrophes is far from simple. A model that predicts a steadily-increasing stock market tells you what to do: invest and wait. One that says that every once in a while you'll encounter an unpredictable shake-up doesn't give any clear direction. It doesn't tell you when these things will happen, so there's no buy/sell prescription. Mostly it counsels keeping some of your money (enough to survive on?) somewhere safer, and only risking what you can live without.

The essays in Gladwell's book "What the Dog Saw" suggest that just because something is unpredictable doesn't mean we should ignore it -- there is still value, sometimes exceptional value, in factoring in the chaotic.

Risk Management

That doesn't mean it will be easy or obvious. Sometimes "factoring in the chaotic" simply means making observations and adjusting our predictions. In the software field, consider Waltzing with Bears, by Tom DeMarco and Timothy Lister. Their previous book, Peopleware, was a relatively easy read because it gave sound evidence that those management practices that you already knew were dumb were, in fact, dumb -- and often more expensive than anyone credited. But in Waltzing with Bears, they move away from the obvious and into the arena of risk management. Here, it's not about what we know will happen, but about what usually won't happen. Many people get away with saying "it won't happen, let's ignore it" most of the time. Sometimes they're even rewarded for being positive thinkers.

DeMarco and Lister first point out something very important. When someone asks you how long a particular subproject will take, it's usually implicit, and sometimes explicit, that they want to know the shortest, most optimistic time for this task. DeMarco and Lister note that the actual time for finishing a task is a probability curve, and if you only ever give the shortest time, you are giving the leading edge of the curve, where it touches the axis. Thus, each subtask prediction has a 0% probability of being correct. This means your project completion time estimation starts out, from day one, with a 0% probability of being correct. They suggest a relatively simple change in behavior: give, instead, the middle of your probability curve for each subtask, so you begin with a palpable completion time. It doesn't make the completion time predictable, but it does make it significantly less wrong.
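To see why summing best-case guesses goes so wrong, here is a minimal simulation sketch -- my own illustration, not from the book -- assuming a hypothetical right-skewed (lognormal) distribution of task durations, with parameters invented for the example:

import random
import statistics

def actual_duration():
    # Hypothetical right-skewed task-duration distribution (parameters are
    # made up for illustration): most tasks finish near the low end, a few blow up.
    return random.lognormvariate(1.0, 0.6)   # duration in days

random.seed(1)
NUM_TASKS = 20

# Characterize one task's probability curve by sampling it.
samples = sorted(actual_duration() for _ in range(100_000))
optimistic = samples[len(samples) // 100]     # ~1st percentile: the "leading edge"
median = statistics.median(samples)           # the middle of the curve

# Project-level estimates built from the per-task numbers.
optimistic_total = NUM_TASKS * optimistic
median_total = NUM_TASKS * median

# One simulated "actual" project for comparison.
actual_total = sum(actual_duration() for _ in range(NUM_TASKS))

print(f"sum of optimistic per-task estimates: {optimistic_total:6.1f} days")
print(f"sum of per-task medians:              {median_total:6.1f} days")
print(f"one simulated actual project:         {actual_total:6.1f} days")

With these made-up numbers, the total built from optimistic guesses comes out at a small fraction of the median-based total, while the simulated "actual" project lands far closer to the latter.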

This is a big shift in perspective. All this time we've been doing project estimation quite badly. We are pressed into doing this. This pressure comes from our basic business model, which says that money is the only reason for doing anything. We optimize around money, and so naturally when we ask for a project estimate, we want the most optimistic one, the one that appears to cost the least. But if we look at it realistically, we see that (1) you can't know how long something will take, you can only guess, and (2) a collection of best-possible guesses produces a useless estimate.

In Peopleware, DeMarco and Lister also look at estimation, but for its effect on productivity. They describe a study in which managers and programmers estimated the completion of a project in various combinations: the manager alone, the programmer and manager together, the programmer alone, and no estimate at all. The rather striking result was that the programmers were most productive when there was no estimate at all. So not only are we estimating very badly, the cost of estimation itself appears to be quite significant. Of course, current business thinking will look at these results and say "very interesting, but we must have estimates, and naturally we want the most optimistic ones."

This is the same business thinking that ignores hard facts in favor of myths and reflex reactions. An excellent example is pay-for-performance. Watch this TED talk by Dan Pink: 40 years ago, a seminal study showed that pay-for-performance only produces improvements in rote assembly-line-type work. For any work that involves creativity, pay-for-performance actually decreases productivity; apparently it demeans people to think that their creative work is only evaluated in terms of money (in the programming profession it's relatively well-understood that, as long as they can get by OK, programmers don't care that much about the money -- it's the quality of experience that matters). The negative impact of pay-for-performance has been emphasized in all the important business books of the past decades, the books that all the business leaders profess to read and agree with. And yet the only reward these same business leaders can think of is money, so they do exactly what has been shown again and again to produce negative results.

Wrong Correctness

This is what I mean by "wrong correctness." Somehow the behavior makes sense and, like many of the stories Gladwell tells in his books, practicing that behavior doesn't produce a big, catastrophic failure. If you fall off a cliff, you learn fast that walking on air doesn't work. But if you only occasionally stumble and slide partway down and can climb back up with only a few scrapes and bruises, you can convince yourself that this path is a reasonable one, that we can just man up and push through and we don't have to look for a better, easier path.

In Outliers, Gladwell looks at how disasters happen. It's never one big thing, but a combination of small, seemingly mundane and manageable mistakes that, taken together, produce a crash. One of Gerald Weinberg's maxims is "Things are the way they are because they got that way, one logical step at a time." Each decision is a small one and appears to be logical in isolation (there's that reductionism problem again), especially if you base your decisions on what you want to believe, or what is convenient to believe, rather than looking at experiments (this is not to say that I believe in all experimental data, just that taking an experimental attitude is more likely to produce better results).

Here's another example of "wrong correctness," also from Peopleware: the Furniture Police. This is the team in a company that decides what furniture you can have, and how much should fit on a floor, etc. From the standpoint of the Furniture Police, the more people you can squeeze onto a floor of a building, the better. And the only metric they have for measuring their success is how much money they save. So they do the thing that is correct for them under their constraints, and cram people closer together, and show positive results through lowered costs.

The actual effect is very, very wrong. It greatly reduces job satisfaction and thus productivity. It appears to save the company money but the amount it actually loses vastly outweighs the tiny savings. Of course, you'd have to look at the big picture to see the loss, rather than the quarterly report where the furniture police seem to produce a win.

Notice the trend. We do these things because the small decisions seem simple and relatively obvious, and in the short term they appear to make the numbers jump in the right direction. Kudos all around, now let's see what we can do for the next quarter. And when the pressure of the long-term trend eventually bursts the dam, everyone is confused and runs around desperately trying to figure out how to fix things -- but of course, the only solutions that make sense are short-term quick fixes.

So is it any wonder that, as companies get bigger, their productivity per worker goes down and down, until we get Microsoft, with billions flowing through it and amazing profitability and lots of smart people, unable to create anything new? For years and years? This is what happens when you accrete lots and lots of wrongly-correct practices. At some point you start going backwards.

Myopic Short-Term Thinking

The marshmallow experiment demonstrates that children who understand deferred gratification tend to be much more successful later in life. Studies of successful people show again and again the need for patience and perseverance. Even those who appear to become successful overnight turn out to have been preparing, watching, and waiting, typically for years, so they are ready when the right opportunity appears.

But it's as if we are a nation of five-year-olds, who only understand instant gratification. We don't want to hear about the years of preparation. We don't want to know the backstory, we only want to hear about the sudden fireworks and imagine ourselves magically walking into the same situation and suddenly being wealthy (which will apparently suddenly make us happy -- another thing we want instantly without any long-term investment).

Even if we do manage to create a human-centered company, it's only a matter of time before the incessant demands of the quarterly-profit beast erode these values. The only (rare) exceptions occur when the creators make up-front decisions to prevent such erosion from taking place: staying private, maintaining controlling interest, or creating employee- or customer-owned cooperatives. Of course, such organizations don't have the potential of growing cancerously fast. To create and maintain a business like this requires strong and experienced leadership in the face of questions about optimizing growth and profits.

Human Resources are our Most Valuable Commodity

Steve Blank tells a story that's been repeated in many forms: the seemingly small, one-logical-step-at-a-time event that makes the key players look up and notice that the company has just gone from sweet to sour. In this case it is the slightly comical decision by a new CFO to stop providing the human resources with free soda, which was costing the company some 10K/year. An easy and rational call, which made the CFO look like a go-getter. The key engineers, once sought avidly by the company, quietly announced their availability and began disappearing. The company didn't panic because it had already gone through its change of life and become more important than its pieces; it was no longer an idealistic youth who valued things like people and quality of life. It had grown up and matured and was now in the adult business of making money. Workers had become fungible resources, easily replaceable.

I remember the first time I saw this happen, in the second company where I had a "real job" after college. I'm not sure what the inciting incident was; perhaps the 3rd or 4th business reorganization within a couple of years, perhaps a sudden withdrawal of bonuses and raises. Whatever the case, a number of the engineers that I considered to be extra-smart began quietly disappearing, with the company making no-big-deal noises as this happened. My own direct manager left, which should have been cold water in my face (but I typically have to learn things in the hardest possible way, and this lesson was -- eventually -- not lost on me).

When did we decide that we were no longer "personnel" (which at least sounds personal) but instead the resources that are human? To the MBAs who probably came up with it, it was certainly the next logical step in fitting everything into a spreadsheet: we've got machine resources, building resources, manufacturing resources, etc., etc., and human resources.

It's the term everyone uses these days, without thought. But recent experiments with Émile Coué's theory of autosuggestion show that repeating something to yourself has an effect whether you believe what you're saying or not. Coué came up with "every day in every way, I am getting better and better." What do you suggest to yourself every time you say "I am a human resource?"

Gladwell tells the story of outstanding college football quarterbacks, the majority of whom are abject failures in professional football -- because the game is played entirely differently in the two domains. Thus, you cannot predict the success of a quarterback based on their success in college. Later in the book, he looks at the way we interview prospects for jobs. It turns out the most critical point of the interview is the initial handshake (or other initial impression). If you like the way someone shakes hands, you take whatever answers they give you and adapt them to that first impression. It's basically a romantic process, except with a real romance you decide the outcome after many months, whereas with a job interview you decide after only hours -- or actually in a moment, with the initial handshake. Even our lame attempts to simulate "real" work (by asking programming puzzles, for example), tell us nothing about the truly critical things, like how someone responds to project pressure. We suffer from Fundamental Attribution Error -- we "fixate on supposedly stable character traits and overlook the influence of context," and we combine this with mostly-unconscious, mostly-inappropriate snap judgments to produce astoundingly bad results. Basically, we think that someone who interviews well (one context) will work well on a task or in a team (a completely orthogonal context).

The answer is something called structured interviewing, which changes the questions from what HR is used to -- questions where the answer is obvious, where the interviewee can generate the desired result (not unlike what we've been trained to do in school) -- to those that extract the true nature of the person. For example, when asked "What is your greatest weakness?" you are supposed to tell a story where something that is ostensibly a weakness is actually a strength. Structured interviewing, in contrast, posits a situation and asks how you would respond. There's no obvious right or wrong answer, but your answer tells something important about you, because it tells how you behave in context. Here's an example: "What if your manager begins criticizing you during a meeting? How do you respond?" If you go talk to the manager, you're more confrontational, but if you put up with it, you're more stoic. Neither answer is right, but the question reveals far more than the typical interview questions that have "correct" answers.

Customer Support is a Cost Center

Studies show again and again that repeat customers are your best source of business. And again and again, companies start looking at the cost of making customers happy in the same way they look at free sodas for employees: "hey, here's some fat that can be trimmed." It's a perfect example of wrong correctness and short-term thinking to say that we can cut back on customer support because it doesn't contribute to the bottom line, which is defined as sales for this quarter. Somehow everyone gets on board with these cost-cutting measures, because it seems so obvious. And often, at the same time, these same folks are saying that yes, repeat customers are very important. Except that it's so easy to make this quarter's numbers look better by doing some quick cutting. You end up cutting something that has taken years to develop, just for a quarterly bump. It's a bit like saying "I could lose 30 pounds overnight just by cutting off my leg!" Oh, sure, when I put it like that it sounds deluded. But how different is it, really?

A horrible customer support experience isn't an accident; it's a brilliant money-saving strategy for the company. And once you've reduced everything to quarterly profits, it's the only logical strategy. To do anything else requires a fundamental shift in perspective and company structure (the very shift I'm interested in). Sure, Apple could do a better job, but they have obviously decided that customer experience is what the company is about. Things should just work, and if they don't you should have a clear path to a solution. Who even thinks about calling Microsoft? Microsoft wins by saving money. Except that they are so out of touch with their customers they don't know what to make next. And more and more of my friends, long-time Windows users, are happily defecting to Apple (and try going to a conference filled with "developers, developers, developers!" -- the laptops there are almost exclusively Macs these days).

You know when you've found a customer-centered company (not the ones who put it in their mission statement because it sounds good, but the ones who actually do it). Trader Joe's and Costco come to mind. The experience is instantly good, and there's no sense of hidden traps waiting to spring when something goes wrong (health insurance and cars come to mind). Very quickly, you're thinking "I'm coming here from now on!" It's what most companies want, but don't have the patience for.

Does "Why?" Matter That Much?

The list of examples of wrong correctness goes on and on. I'm sure I could write a book exclusively on the ways that we screw up. I have a reading list of books describing why we make these bad decisions. But I think that people who have spent any time in business have been personally frustrated by enough of these mistakes to know it's an overwhelming problem.

I do think "why?" matters, but I also see it as an endless recursive hole; I could easily spend the rest of my life becoming an expert on why people persist in turning their businesses into hellholes.

In the end I'm not so interested in understanding why we go wrong as in discovering ways to inspire us toward naturally better decisions. In the same way that an open-spaces conference guides us to spontaneously create the best possible conference experience, I believe there is some structure that will guide us to spontaneously create the best possible business experience (and, for those not quite ready to jump in completely, at least make them question the knee-jerk addition of structures "because that's the way you run a business").

That's what I'm working on now. It's very ambitious, but it's the only thing that I find compelling: completely change our experience of work and business to make it happiness instead of drudgery, in the same way that the open-spaces format makes conferences wonderful instead of an effort (for both organizers and attendees). I know I can't do it by myself -- I need to find the right community-building tools so that lots of ideas can appear and flow (I imagine some kind of web-based conversation, along with in-person events like open-spaces conferences and workshops). I don't want to "own" the result, in the same way that Harrison Owen didn't try to "own" open spaces. I just want it to happen, so we can stop cramming ourselves into this small, dank, oppressive space that we've been calling business and instead venture into a big world of ebullient possibility, measured by creativity, self-expression, productivity, and joy.


About the Blogger

Bruce Eckel (www.BruceEckel.com) provides development assistance in Python with user interfaces in Flex. He is the author of Thinking in Java (Prentice-Hall, 1998, 2nd Edition, 2000, 3rd Edition, 2003, 4th Edition, 2005), the Hands-On Java Seminar CD ROM (available on the Web site), Thinking in C++ (PH 1995; 2nd edition 2000, Volume 2 with Chuck Allison, 2003), C++ Inside & Out (Osborne/McGraw-Hill 1993), among others. He's given hundreds of presentations throughout the world, published over 150 articles in numerous magazines, was a founding member of the ANSI/ISO C++ committee and speaks regularly at conferences.

This weblog entry is Copyright © 2009 Bruce Eckel. All rights reserved.
