> To turn things back around somewhat, what happened to the write documentation first, then write the source from the documentation? If something needs to be added (and who knows of a program where it doesn't?) then document it first?
You haven't specified the level of detail of the documentation, so any response will be based on the reader's assumptions about what you mean.
But to give an example where I think the above doesn't make any sense, I can take an example from my current work. One group I work with requires that all designs be documented to a level of detail such that 'any developer can code from them'. In other words, these documents basically contain the same level of detail as the code itself. These documents also take roughly the same amount of time to write as the code itself. Effectively, this means the code is written at least twice: once in Word and once in the target language. Any changes compound this duplication of effort.
While such a document has some value, since it's easier to read than the code, especially for someone unfamiliar with the language, most of the useful detail could be generated from the code automatically.
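To illustrate that last point: in Python, for instance, a standard-library tool like pydoc can produce reader-friendly documentation straight from docstrings in the source, so the prose description lives next to the code instead of in a separate Word document. This is a minimal sketch; the `transfer` function here is hypothetical, invented purely for illustration:

```python
import pydoc

def transfer(amount, source, target):
    """Move `amount` from the `source` account to the `target` account.

    Raises ValueError if `source` has insufficient funds.
    """
    if source["balance"] < amount:
        raise ValueError("insufficient funds")
    source["balance"] -= amount
    target["balance"] += amount

# Generate plain-text documentation directly from the code:
# the signature and the docstring are extracted automatically.
doc = pydoc.plain(pydoc.render_doc(transfer))
print(doc)
```

Because the generated text comes from the same file the developer edits, a change to the code and a change to its documentation are one edit, not two, which avoids the duplication of effort described above.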
Basically, what you are suggesting is exactly what a lot of shops force on their developers. And yes, you can do it that way, but it is extremely inefficient (i.e., slow). Where I work, this problem manifests itself as unhappy customers: their expectations and needs are not met by this approach, and often the final product arrives long after it is no longer needed.
> Also, how does this contrast with the "only document what's needed" mindset that most programmers end up in, because there's literally no time to do anything else? At what point does practicality kick in?
What's the argument against "only document what is needed"? What's the value in writing documents that are not needed?