> Quality is context-dependent. Long ago I programmed video games for the Atari 2600. We did things like jump into the second byte of an instruction just to save a ROM byte. With a very limited platform (128 bytes of RAM, 4K ROM in cart, no interrupts, no image buffers, etc.) tricks like this made the difference between making the game possible or giving up. The context for typical applications today is quite different, so a different approach to quality is appropriate.

That's funny, I wrote some assembly language on Atari too, but it was the Atari 800. I can remember doing all kinds of tricks such as, if I remember right, dynamically replacing whole sections of code with other code. I felt good about it at the time, but later learned that software engineering frowned on that sort of thing, that code was supposed to be inviolate and the only thing the program should manipulate was data. Though it just occurs to me now that perhaps that was my first flirtation with metaprogramming.
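That "replacing code with code" trick has a tame descendant in dynamic languages. As a purely hypothetical sketch (this is nothing like the original 6502 tricks, and the function names are invented for illustration), CPython even lets you swap one plain function's compiled body for another's at runtime:

```python
# Hypothetical sketch of "replacing code with code" in a dynamic
# language -- the spirit, if not the letter, of the old assembly trick.

def slow_answer():
    # The "original" routine: computes the result the long way.
    return sum(range(10))

def fast_answer():
    # The replacement routine: same result, precomputed.
    return 45

print(slow_answer())  # computed the long way

# Swap in the replacement body while keeping the original name.
# (Works in CPython for plain functions with no free variables.)
slow_answer.__code__ = fast_answer.__code__

print(slow_answer())  # now runs the replacement code
```

Software engineering still frowns on this sort of thing, of course, which is rather the poster's point.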
> Quality is subjective. We can all agree that bugs are bad, but what about naming conventions, commenting style, maintainability or extendibility? Great software has been created using all kinds of styles and conventions (even with mixed styles). Often goals are contradictory: adding extra abstraction may make the code easier to extend but may also create a greater learning curve for more mundane maintenance. It's fine to say "do it right the first time", but the problem is there is no "right" way.

I think it is much easier to say, "do it right the first time," than actually do it, not because you're under pressure, but because if this really is your first time, you just don't have enough experience to know how best to do it. I think the most important asset you need to do a good design is domain experience.
> I'm not sure about the technical definition of "debt", but the fact is that all code constrains you in some way. You're not going to choose between debt and non-debt software. At best you get to choose how much debt and what type of debt you want to live with.

I think we know it when we see it. Certainly different people have different ideas, but each of us has that little voice that says, "you know, you probably shouldn't duplicate code here in the name of expediency, but..." or whatever rule it is we're violating.
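That "you probably shouldn't duplicate code here" voice can be made concrete. A hypothetical sketch (all names invented for illustration): the expedient version pastes the same discount rule into two functions, and the debt comes due the day the rule changes and one copy gets missed; paying it down means naming the rule once.

```python
# Expedient version: the same discount rule pasted into two places.
def invoice_total(items):
    total = sum(items)
    return total * 0.9 if total > 100 else total

def quote_total(items):
    total = sum(items)
    return total * 0.9 if total > 100 else total

# Paying down the debt: one named rule, one place to change it.
def apply_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_total_v2(items):
    return apply_discount(sum(items))

def quote_total_v2(items):
    return apply_discount(sum(items))
```

Neither version is "right" in the abstract; the duplication is only debt to the extent the rule is likely to change, which is exactly the context-dependence argued above.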
> To a great extent your choices are a gamble on the future and more abstractions won't always win the day. For example, if Artima had started in the pre-Internet era as a BBS, it's not likely that much of that code would be useful for the Artima website no matter how well designed it was.

More abstraction is definitely not my idea of better quality. We try very hard to only abstract to the extent it helps us do what we know we need to do today.
Your comment about the BBS reminded me of Luke Hohmann's comments about the cost of change jumping up when you encounter requirements that weren't foreseen and therefore weren't accommodated by the architecture:
Luke Hohmann: For example, when marketing says that in addition to the heavy desktop client you currently support, you also need to support Palm computers, guess what? Your cost of change curve just flew up high again. These cost of change jumps are usually correlated with significant architectural infrastructure change, because you don't have any infrastructure for your tests. You don't have any infrastructure for your database. You're probably learning something new in the development team. Your developers don't know how to program Palm. Are you going to throw out all the people or keep them? If you keep them, they've got to learn it. They don't know the idioms.
Think of all of the infrastructure that helps keep your cost of change curve low, which you can get to not only in XP but also in other methods. In your existing system, you have all your tests and your test database. Your build infrastructure is all set. Your documentation infrastructure works. You've figured out your naming conventions and tagging conventions. You've figured out how to link your help file to your online documentation. You have all that figured out, and the cost of change curve is low, because a big part of the high cost of change is creating this infrastructure, of figuring this stuff out. Now you don't have all the infrastructure for Palm support, or whatever the new requirement is, so your cost of change curve jumps again. I applaud the concept of the flattening cost of change curve, because it's right. You should be able to achieve that flattening after the first release or two. It's just not true for a mature product over its entire lifecycle.
I definitely do not believe in speculative design. It is just too hard to predict the future, and too expensive to implement anything more than what you actually know you need. Better to apply resources in less speculative ways and deal with the future when it lands on your doorstep.
"That's funny, I wrote some assembly language on Atari too, but it was the Atari 800. I can remember doing all kinds of tricks such as, if I remember right, dynamically replacing whole sections of code with other code. I felt good about it at the time, but later learned that software engineering frowned on that sort of thing, that code was supposed to be inviolate and the only thing the program should manipulate was data. Though it just occurs to me now that perhaps that was my first flirtation with metaprogramming."
Well, my point was that sometimes following good software engineering principles results in failure. The 2600 wasn't exactly a study in good hardware engineering either, but it had a price point it had to meet, and in so doing required that games written for it be highly compressed and in some cases maximally real-time (that is, some software timing had to be accurate to 1 CPU cycle - the smallest time interval software can control). This was quite different from the Atari 800, which had at least an order of magnitude more resources and handled the real-time issues in hardware.
Anyway, the point is that we didn't do it to be cool, we did it because we had to.
In my haste to comment on the issues surrounding Atari 2600 programming I failed to mention that I actually thought your response was quite good overall.
Peter Hickman: "If you don't do it now it will never get done"
For me that about sums it up. It's the Pragmatic Programmers' 'broken windows' rephrased, but it's important enough to repeat as often as you can, in as many ways as you can.
> Is it just me or is the current debacle in Redmond over the new version of Windows a perfect example of technical debt that has overwhelmed a project?
Possibly, but without reliable information, who knows? It may simply be that the development deadlines were unrealistic. I'd be very reluctant to use Microsoft as an example of bad practice since in most cases their practices are no different to anyone else's and most attacks on MS are just vacuous noobie flag waving.
> I think it's pretty well established that the Microsoft method has been to shoehorn new features in and get the product out of the door as soon as possible.
Naturally. Microsoft, like all successful companies, is a sales driven company.
> I've read an interview with a former MS project leader that not only admitted as much but espoused it as an effective method that MS has put to use.
Ditto.
> Specifically mentioned was the trouncing of Netscape.
A lot of history is being rewritten about the Netscape/IE battle. The fact is that Netscape gained ground while they had a better product and lost it when they didn't. Did MS deliberately use their commercial and financial advantage to produce their own better version? You bet they did. They're a commercial company and Netscape isn't a charity. But the bottom line is that Netscape lost - in this case - because the product lost.
There is one code quality tool that used to be widespread but which pretty much died out (in my experience) about ten years ago. It requires no installation and provides an ideal mechanism for disseminating good practice across a team. It's the code walk-through.
Maybe I'm just showing my age but there was a time in my company when no code could be signed off until a walk-through had been conducted with either two or three other developers. Of course, it had disadvantages such as organising and finding other participants who had any interest in the details of your code, etc., but it certainly concentrated the mind, particularly if you had organised a time and had included someone who knew their stuff better than yourself.
I know these days we have pair programming that should - in many ways - cover the same ground but I still think something was lost when walk-throughs died out.
Last time I proposed a walk-through of my code, all I got was some stares of disbelief and questions along the line of "Why? What's wrong with it?"
> A lot of history is being rewritten about the Netscape/IE battle. The fact is that Netscape gained ground while they had a better product and lost it when they didn't. Did MS deliberately use their commercial and financial advantage to produce their own better version? You bet they did. They're a commercial company and Netscape isn't a charity. But the bottom line is that Netscape lost - in this case - because the product lost.
Well, let me make myself clear: the interview I read claimed that while Netscape was rewriting their browser from scratch, IE came out with a bunch of new features and ate Netscape's lunch. Whether that is really the case is for the historians to decide; it's fairly irrelevant to the point I am trying to make.
Please realize that this is not about bashing Microsoft in any way. This issue of 'beautiful code' is not just a discussion between developers. There have been very powerful external forces working against great code, and the MS example is just the most universally understood example that I can come up with. Management and consultants have been pushing this philosophy that great code doesn't matter for years. My point is that it does, and it's my contention (from reports I have read on the situation) that this is one of the major reasons they are struggling.
I understand why people want to shy away from any discussion that might be considered 'bashing' MS because of where it often leads. But we are all adults here. (right?) Can't we have a rational discussion about the 800 pound gorilla in the room?
> I'd be very reluctant to use Microsoft as an example of bad practice since in most cases their practices are no different to anyone else's and most attacks on MS are just vacuous noobie flag waving.
I'd be very reluctant to use Microsoft as an example. Microsoft build large software systems.
"There are hundreds of successful ways to build small applications, but only a few ways to build large systems successfully."
Capers Jones, Software Assessments, Benchmarks, and Best Practices, 2000, p. 431.
> I'd be very reluctant to use Microsoft as an example of bad practice since in most cases their practices are no different to anyone else's and most attacks on MS are just vacuous noobie flag waving.
I fail to see why their code practices being the norm makes them a bad example. I think it makes them a rather good example.
> I'd be very reluctant to use Microsoft as an example. Microsoft build large software systems.
> > > I'd be very reluctant to use Microsoft as an example. Microsoft build large software systems.
> >
> > And how is that pertinent?
>
> The problems faced building large software systems are not "the norm".
So they don't matter? I would say the larger the system the more important the quality of the code. That's my experience.
Isn't this related to the old rule that the last 10% takes 90% of the time? When you get near the end, you're basically doing maintenance. This part of your project is an indication of how easy or difficult maintenance is going to be in the future. I still say it's because you can't see the big picture and the network of interaction between all the objects. I still fail to see how you can maintain something you can't see.
> > The problems faced building large software systems are not "the norm".
>
> So they don't matter? I would say the larger the system the more important the quality of the code. That's my experience.
They matter if you are building large software systems. If you are not building large software systems you do not face the same problems.
Capers Jones made a career measuring software organisations - I have no reason to disbelieve his conclusions.
> They matter if you are building large software systems. If you are not building large software systems you do not face the same problems.
That is a completely flawed conclusion based on poor logic. That small projects don't face the same set of problems as large projects doesn't mean that there are no common problems between large projects and small projects. It's plain to see that there are common issues.
That all men are not English doesn't imply there are no English men. That's equivalent to what you are trying to argue.
> Capers Jones made a career measuring software organisations - I have no reason to disbelieve his conclusions.
I'm sure Capers Jones is a really smart guy. Unfortunately he has not joined in on this discussion as of yet. Has Capers Jones stated that the intersection of the problems facing small projects and the problems facing big projects is an empty set? If so, I'd like to know exactly when my project will reach the critical size where our existing problems will disappear and a completely new set appear in one instant.
> > They matter if you are building large software systems. If you are not building large software systems you do not face the same problems.
>
> That is a completely flawed conclusion based on poor logic. That small projects don't face the same set of problems as large projects doesn't mean that there are no common problems between large projects and small projects. It's plain to see that there are common issues.
If there are common issues between large projects and small projects, why would you choose to study those common issues in large projects where there are additional (dominant) problems, rather than studying them in small projects where they stand alone?
> That all men are not English doesn't imply there are no English men. That's equivalent to what you are trying to argue.
>
> > Capers Jones made a career measuring software organisations - I have no reason to disbelieve his conclusions.
>
> I'm sure Capers Jones is a really smart guy. Unfortunately he has not joined in on this discussion as of yet. Has Capers Jones stated that the intersection of the problems facing small projects and the problems facing big projects is an empty set? If so, I'd like to know exactly when my project will reach the critical size where our existing problems will disappear and a completely new set appear in one instant.
Happily, Capers Jones has published widely, for those who are willing to learn from others' experience.
Flat View: This topic has 97 replies on 7 pages