Donald Knuth, author of The Art of Computer Programming, and a pioneer in the analysis of algorithms, was interviewed by SD Times' Andrew Binstock. In the interview, Knuth shares his views on XP, unit testing, code reuse, multi-core architectures, and the future of software development.
SD Times' Andrew Binstock recently interviewed Donald Knuth, one of the pre-eminent computer scientists of all time, a pioneer in the modern study of algorithms, and author of The Art of Computer Programming (TAOCP). In the interview, Knuth talks about a wide variety of topics, including the latest installment of TAOCP, on combinatorial algorithms and Boolean functions.
The most interesting parts of the interview concern Knuth's views on several current issues in software development. One such issue is the question of whether one should design a computer program first and then build it, or whether the currently fashionable test-code-execute cycle is more effective:
The idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up..."
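To make concrete the practice Knuth is setting aside, here is a minimal sketch of a unit test with a mocked-up dependency. The `total_price` function and `tax_service` collaborator are hypothetical illustrations, not anything from the interview:

```python
import unittest
from unittest.mock import Mock

def total_price(items, tax_service):
    """Sum item prices and apply the rate supplied by an external tax service."""
    subtotal = sum(items)
    return subtotal * (1 + tax_service.rate_for("US"))

class TotalPriceTest(unittest.TestCase):
    def test_applies_tax_rate(self):
        # The external service is "mocked up": the test never touches a real one,
        # which is exactly the kind of scaffolding Knuth says he rarely needs.
        tax_service = Mock()
        tax_service.rate_for.return_value = 0.10
        self.assertAlmostEqual(total_price([10.0, 20.0], tax_service), 33.0)
        tax_service.rate_for.assert_called_once_with("US")
```

Run with `python -m unittest` to get the immediate feedback the test-code-execute cycle is built around.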
Knuth also believes that developers should think through their programs before writing any code, and disagrees with most tenets of "extreme programming:"
With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I’ve ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me...
In another segment of the interview, Knuth, designer of the MMIX computer architecture, discusses his views on multi-core processing:
I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Itanium" approach that was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write...
How many programmers do you know who are enthusiastic about these promised machines of the future? I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I’m wrong. I know that important applications for parallelism exist—rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose techniques, which will need to be changed substantially every few years... Even if I knew enough about such methods to write about them in TAOCP, my time would be largely wasted, because soon there would be little reason for anybody to read those parts...
So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it’s a pipe dream. (No—that’s the wrong metaphor! "Pipelines" actually work for me, but threads don’t. Maybe the word I want is "bubble.")
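Knuth's remark that pipelines work for him while threads don't has a familiar software analogue: a chain of stages, each consuming the previous stage's output, with no shared mutable state and no locking. A minimal sketch using Python generators (the stage names are illustrative, not from the interview):

```python
def read_numbers(lines):
    """Stage 1: parse one number per line of input."""
    for line in lines:
        yield int(line)

def keep_even(numbers):
    """Stage 2: filter out odd values."""
    for n in numbers:
        if n % 2 == 0:
            yield n

def square(numbers):
    """Stage 3: transform each surviving value."""
    for n in numbers:
        yield n * n

# Stages compose like a Unix pipe; each item flows through all three
# stages in turn, with no threads and nothing to synchronize.
pipeline = square(keep_even(read_numbers(["1", "2", "3", "4"])))
print(list(pipeline))  # [4, 16]
```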
Knuth has long advocated literate programming, and on this topic he notes:
Literate programming is certainly the most important thing that came out of the TeX project. Not only has it enabled me to write and maintain programs faster and more reliably than ever before, and been one of my greatest sources of joy since the 1980s—it has actually been indispensable at times...
In my experience, software created with literate programming has turned out to be significantly better than software developed in more traditional ways. Yet ordinary software is usually okay—I’d give it a grade of C (or maybe C++), but not F; hence, the traditional methods stay with us. Since they’re understood by a vast community of programmers, most people have no big incentive to change, just as I’m not motivated to learn Esperanto even though it might be preferable to English and German and French and Russian (if everybody switched)...
Jon Bentley probably hit the nail on the head when he once was asked why literate programming hasn’t taken the whole world by storm. He observed that a small percentage of the world’s population is good at programming, and a small percentage is good at writing; apparently I am asking everybody to be in both subsets...
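For readers unfamiliar with the idea, a literate program is written as prose for human readers, with named code chunks interleaved; a tool then extracts and assembles the chunks into a compilable program. A tiny noweb-style sketch (the sieve example is illustrative, not from Knuth's published programs):

```
The sieve needs a flag array big enough to cover the search range.

<<allocate the sieve>>=
is_prime = [True] * (limit + 1)
is_prime[0] = is_prime[1] = False

Each prime we find rules out all of its multiples, starting at its square.

<<cross off multiples of p>>=
for m in range(p * p, limit + 1, p):
    is_prime[m] = False
```

The prose carries the reasoning; the chunk names (`<<allocate the sieve>>`) let the exposition proceed in whatever order best serves the reader rather than the compiler.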
Finally, Knuth also shares his views on code reuse:
I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace...
What do you think of Knuth's take on currently fashionable trends?
I would be interested to hear him flesh out a little more about Extreme Programming. Does he believe iterative development is bad? Does he believe tight feedback loops are bad? Or are there other aspects he doesn't like? It's interesting because it sounds like his own writing projects take an iterative approach, but I can't tell if he believes writing a program is different enough to warrant an entirely different approach.
I took his comments on multicore as a caution against looking too hard for a problem to fit a new tool. Given his experience surveying algorithms, he probably has a good intuition about which problems are helped by parallelism. (Frankly, I'm more intrigued by clockless CPUs than by multicore.)
Now I'm interested in reading up on Literate Programming.
> So why should I be so happy about the future that hardware vendors promise?

Whether you are happy or not, there is no alternative in sight if you want more power than they can provide in a single core.
> They think a magic bullet will come along to make multicores speed up my kind of work; I think it’s a pipe dream. (No—that’s the wrong metaphor! "Pipelines" actually work for me, but threads don’t.)

Pipelines provide diminishing returns with greater pipeline depth. I don't expect to see any silver bullets that make harnessing multicores easy either. We can expect tools and techniques that make it a little less hard.
> > So why should I be so happy about the future that hardware vendors promise?
>
> Whether you are happy or not, there is no alternative in sight if you want more power than they can provide in a single core.
I read him as being sad. Reading between the lines, his point was that Intel has a near monopoly (or maybe a bit more than that) on architecture. Even if some hardware geeks came up with a new architecture (instruction set) that gets more work done than a single CPU with the available transistor budget and clock, it would never get a foothold due to the Wintel situation. This is much like what happened to mainframes in the late 1960s.
The RISC thing has come and gone. We're left with an architecture that has reached its level of incompetence.
At risk of upsetting some readers, there is one problem not mentioned in the interview that does benefit from parallel execution in multi-[thread|core|processor] machines. I'll note that DB2's latest (off mainframe) version is now fully threaded. Set processing is inherently faster than sequential processing. Set thinking is not common, nor relevant in many arenas. But where it is, it loves such machines.
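The set-versus-sequential point can be sketched without reference to DB2. Below, a row-at-a-time nested loop is contrasted with a set-oriented formulation of the same query; the function and data names are illustrative:

```python
# Sequential, row-at-a-time matching: O(n*m) comparisons, and the
# dependence on loop order makes it hard to spread across cores.
def common_rows_loop(left, right):
    result = []
    for row in left:
        for other in right:
            if row == other:
                result.append(row)
                break
    return result

# Set-oriented: one hash-lookup pass. Because the operation is declared
# over whole sets, an engine is free to partition it across cores.
def common_rows_set(left, right):
    right_keys = set(right)
    return [row for row in left if row in right_keys]

left = [("a", 1), ("b", 2), ("c", 3)]
right = [("b", 2), ("c", 3), ("d", 4)]
print(common_rows_set(left, right))  # [('b', 2), ('c', 3)]
```

Both return the same rows; only the set form expresses the work in a way a parallel engine can exploit.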
For general-purpose processors in, say, five years' time, I think multicore is inevitable; the only real question is how many cores. Will a typical desktop have under 10 cores or many more? Quad cores are getting common now, and if we fold in the GPU it seems plausible that 100+ core CPUs might be the future. Not a very attractive prospect if you are uncomfortable with multithreaded programming.
Some of my work has hundreds of tasks that could be run concurrently.
In my view, literately programmed code should be visible to every programmer throughout its evolution, so literate programming doesn't cope well with the Open/Closed Principle (OCP). I guess that is why he prefers "re-editable" code to reusable black-box code. But I think the benefits of information hiding should not be sacrificed.
About multicore: I think it could be a jump in paradigm terms, toward truly concurrent processing in an electronic sense. As an analogy, I think it could be possible to implement a sort of parallel logic circuit over multicore machines, something like what electrical parallel circuits do. Going further into my sci-fi, I can imagine a sort of "resonant logic" over parallel processing. (ahhh, the voices...)
> For general purpose processors in say 5 years time I think multi core is inevitable, the only real question is how many cores.
Maybe Intel is just dead wrong about customers being willing to pay extra for n cores they have no way to utilize, and clever hardware vendors will sell much more in the low-cost segment with now state-of-the-art dual cores. So you might be correct about the technical trend, but Knuth might be right as well that it will be mostly irrelevant and a big failure.
If you ignore the GPU, two cores seems to be more than adequate for most uses now and for some time to come. However if/when AMD/Intel start merging the CPU and GPU what will the total core count be and will all cores be the same or will capabilities differ (as already happens with the Cell systems). Taking best advantage of these systems could be even more challenging than simple symmetrical multicores (noting that a number of existing multicores already have NUMA).
> I would be interested to hear him flesh out a little more about Extreme Programming. Does he believe iterative development is bad? Does he believe tight feedback loops are bad?
I don't think he's against iteration per se. I think that the problem is much of Extreme Programming is actually Experimental Programming, where you 'design' a system by repeatedly coding stuff and going back to the customer and saying "Is this what you want?". Tight feedback loops can be a symptom of (or an excuse for) not doing enough homework up front to have confidence in what it is that you're trying to develop.
It's on the market, so it's real. But aside from PS3 developers at Sony and a few universities, no one is really promoting programming the 8-core Cell. Today's games use a fraction of its potential, and the universities are still researching what languages and methods could be useful for such an architecture. From this point of view, it's academic even though it's on the market already.
> > Extreme Programming is actually Experimental Programming, where you 'design' a system by repeatedly coding stuff and going back to the customer and saying "Is this what you want?"
>
> that's anarchy :)
Well it's a lot better than developing something blindly and then telling the customer "this is what you want" when in fact, it hardly ever is.