How can we see the problems that we face when building software through new eyes?
(This started as a reply to John Camara in the previous blog entry, then took on a life of its own.)
I believe by now it is safe to assume that the majority of software developers and managers of software projects have come to accept the importance of testing, so I feel it's unnecessary to comment on it further.
I've learned not to trust the fact that there's a lot of noise about something. It usually doesn't correlate to reality. A friend works for a company that's just now learning about unit testing, and apparently testing in general. I've consulted with companies where testing is still a new thing. My guess is that the majority of software developers have not yet accepted the importance of testing, and that it's only the noisemakers on the leading edge who have been learning and talking about it, and thus giving the impression that it's now well accepted.
Another example of noise vs. reality: people are always saying that there are still more COBOL and FORTRAN programmers out there than any other kind. But when was the last time you saw one, much less talked to one? By that metric, they don't exist. But apparently there are tons of them.
Now, as important as it may be to have a second pair of eyes on a problem, I feel that's not the most important benefit of code reviews. More than anything, code reviews provide a means of mentoring each other.
Yes, it's one thing to show my examples and talk about how you would, in theory, use a particular language feature properly. But when you work with code that someone actually has a vested interest in, it becomes real and important. (I create toy examples in books out of necessity only.)
I think that the more abstract the concept, the more important it is to work with a project that people are actually trying to build, to take it out of the realm of ideas. For example, when teaching OO design (which is more abstract than programming), I encourage people to bring their own designs, so we can work on them together. This not only brings the importance up, but it also makes better use of the training or consulting time, because people can actually gain forward motion on their projects.
This form of mentoring is likely to be the only form most developers experience these days. After all, mentoring has lost most, if not all, of its priority in these sad times of cutting costs at all costs. We have simply forgotten how important it is to pass collective experience from generation to generation.
This may come from (perceived) efficiency considerations. Mentoring on a regular basis may appear to be just a cost to a project -- interference with getting things done. Carving out a week for training between projects, on the other hand, is a discrete chunk of time: you do it, you're done, and people can get back to slinging code as fast as they can. Or any number of other scenarios.
I think the problem is that while many programmers understand that programming happens in the mind, and the code itself is just an artifact of the process, outside the field the code looks like it's what you're doing. (An understandable perception, since the code is the core deliverable.) So if it's about the code and not the mental process behind the code, it makes sense that you would do whatever you can to produce the code as fast and as cheaply as possible, and discard anything that appears to hinder the creation of code. From this follow the logical ideas that coding is typing, so you want to see people typing all the time, and that 10 people can type faster than one person, so if you can hire ten Indians cheaper than one US programmer, you'll get a lot more typing done and big economic leverage.
In reality, study after study indicates that success comes from who you hire. This suggests that programming is not a mass-production activity where programmers are replaceable components.
I think a good analogy is writing a novel. Suppose you want to create a Stephen King novel (I'm not much of a fan, but this is exactly the kind of book that publishers stay up nights trying to figure out how to mass produce). You could say, "A book is made up of words, and words are created by typing, so to create a book we need to get a bunch of people typing. The more people we can get typing, the faster we'll create a book. And the cheaper the typist, the cheaper it will be to create books."
It's hard to argue with that logic. After all, a book is made up of words, and words are created by typing. But anyone who reads novels knows there must be a fundamental flaw in the logic, because there are authors you like and others you can't stand. The choice of words and the structure of the book are what make the difference, and those come from the person writing the book. We know you can't replace one author with 10 lesser writers and get anything like what the author could produce, or anything you'd want to read.
Another example is a house. Like software, it's composed of subsystems that fit together. Like software, you have a bunch of people working on it, and it's even true that some of those people are replaceable. It doesn't really matter who is nailing up the wallboard. But you really notice the design of the house, and how well it was put together, and those things are determined by the architect and the builder.
I've been struggling with this general problem for a long time. That is, the "logical" arguments that are very hard to refute, like "software is created by typing." True on the surface, but not really the essence of the issue. But if you keep the argument on those terms, you can't really get anywhere, because the logic is irrefutable. Even if that logic completely misses the real issue.
This is probably why I keep fighting with the static-dynamic language debate, because it has the same feel to me. You can come up with all kinds of reasons that static checking is a good thing, until you have an experience programming in a dynamic language where you are vastly more productive than with a static language. But that experience defies the logic used to back up the reasoning behind static languages.
Here's another one. I believe that "details matter," and that noise really does wear you down (studies show that noise makes you tired). What I'm talking about here is visual and complexity noise. So I was disappointed when, for example, Ruby turned out to have begin and end statements, and that it uses "new" to create objects. These are all noise artifacts from previous languages, required to support their compilers. If your language creates all objects on the heap, you don't need to say "new" to distinguish between heap and stack objects (like you do in C++, which was mindlessly mimicked by Java). And everyone always indents their code, so you can use indentation to establish scope. Besides the fact that I'm justifying the design minimalism of Python here, when I put these ideas out I will probably get a lot of perfectly reasonable rationalizations about why this is the best way of doing things. And without questioning the fundamental principles upon which those arguments are founded, those arguments will be pretty airtight, even if they really come down to "I'm used to that and I don't want to think differently about it."
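To make the contrast concrete, here's a minimal Python sketch (my own illustration, with invented names) of the two points above: no "new" keyword because every object already lives on the heap, and indentation instead of begin/end to establish scope.

```python
class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

# No 'new' keyword: since all objects go on the heap, the call alone suffices.
acct = Account("Alice")
acct.deposit(100)

# Indentation establishes scope: no 'begin'/'end' or braces needed.
if acct.balance > 50:
    print("funded")
```

The point is what's absent: nothing in this code exists solely to help a compiler distinguish cases the language has already decided for you.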
Java has always required a lot of extra typing. But the fact that Eclipse and other IDEs generate code for you seems to justify enormous amounts of visual noise, and for those in the midst of it, that's OK, and even desirable. "It's clearer because it's more explicit" (Python even has a maxim "Explicit is better than implicit"). This is even taken to extremes with the idea, supported by a surprising number of folks, that every class should have an associated interface, which to my mind makes the code far more complicated and confusing. Which IMO costs money, because everyone who works with that code must wade through all those extra layers of complication.
All of this detail costs time and money, even if you have a tool generating a lot of code for you. But if you're in the middle of it, it's all you can see and it makes sense because it seems to work. And of course, if you compare one Java project to another, you aren't questioning the cost of using the language.
In contrast, when I teach OO design, my favorite approach is to (A) work on a project that the client is actually working on and (B) move quickly through the design process and model the result in a dynamic language (Python is what I know best). In most cases, the client doesn't know Python, but that doesn't matter. We still very quickly get a model of the system up and running, and in the process we discover problems that our initial design pass didn't see. So because of the speed and agility of a dynamic language, design issues appear early and quickly and we can refine the design before recasting it in a heavyweight language.
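As a hypothetical illustration of that kind of quick modeling session (the domain and names here are invented, not from an actual client project), a few lines of Python are enough to make a design executable and surface questions a diagram alone wouldn't:

```python
# Roughing out an order-processing design to flush out problems
# before committing to a heavyweight implementation.

class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)  # item name -> quantity on hand

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError(f"insufficient stock for {item}")
        self.stock[item] -= qty

class Order:
    def __init__(self, inventory):
        self.inventory = inventory
        self.lines = []

    def add(self, item, qty):
        # Exercising the model immediately raises a design question the
        # initial design pass missed: should stock be reserved at
        # add-time, or only at checkout?
        self.inventory.reserve(item, qty)
        self.lines.append((item, qty))

inv = Inventory({"widget": 10})
order = Order(inv)
order.add("widget", 3)
print(inv.stock["widget"])  # 7
```

Twenty lines of running model, and the design conversation has already moved from boxes and arrows to a concrete decision about reservation semantics.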
And I would argue that if the initial code is done in the heavyweight language instead, then (A) there is resistance to putting the design into code, because it is much more work-intensive -- it isn't a lightweight activity -- and (B) there is resistance to making changes to the design, for the same reason.
And yet, I will probably get any number of perfectly reasonable arguments to the effect that this approach doesn't make sense. I usually find that these arguments are not based on experience, but on logic that follows from fundamental assumptions about the world of programming.
It may not even be possible to prove things logically when it comes to programming. So many of the conclusions that we draw this way appear to be wrong. This is what I like about the book "Peopleware," and also "Software Conflict 2.0" that I'm now reading. These books point out places where we operate based on what seems perfectly logical, and yet is wrong (one of my favorite studies in "Peopleware" shows that, of all forms of estimation, the most productive approach is when no estimate at all is made).
The story that I heard about the Greek Natural Philosophers (what we call physicists today) is that they were more interested in the arguments about how something worked than in how that thing actually worked. So they didn't drop small and large stones to find out whether one fell faster than the other; they argued based on their assumptions.
It seems to me that we're in the same situation when we try to argue about programming. A large part of the Enlightenment came from the move to the scientific method, which seems like a small, simple step but turned out to be very big, and to have a very big impact. To wit, you can argue about how you think something will happen, but then you have to go out and actually do the experiment. If the experiment disagrees with your argument, then you have to change your argument and try another experiment.
The key is in doing the experiment, and in paying attention to the results, rather than starting with belief and trying to wrestle the world into line with that belief. Even after some 500 years, human society is still trying to come to terms with the age of reason.
I think the essence of what the agilists are doing is a perfect analogy to the discovery of the scientific method. Instead of making stuff up -- and if you look back at all the "solutions" we've invented to solve software complexity problems, that's primarily what they are -- you do an experiment and see what happens. And if the experiment denies the arguments you've used in the past, you can't discard the results of the experiment. You have to change something about your argument.
Of course, you aren't forced to change your argument. But even if it doesn't happen overnight, those who look at the experiments and realize that something is different from the way they thought it was will move past you and forge into new territory -- territory that your company may not be able to enter if it refuses to change its ideas.
Bruce Eckel (www.BruceEckel.com) provides development assistance in Python with user interfaces in Flex. He is the author of Thinking in Java (Prentice-Hall, 1998, 2nd Edition, 2000, 3rd Edition, 2003, 4th Edition, 2005), the Hands-On Java Seminar CD ROM (available on the Web site), Thinking in C++ (PH 1995; 2nd edition 2000, Volume 2 with Chuck Allison, 2003), C++ Inside & Out (Osborne/McGraw-Hill 1993), among others. He's given hundreds of presentations throughout the world, published over 150 articles in numerous magazines, was a founding member of the ANSI/ISO C++ committee and speaks regularly at conferences.