If processors and networks are getting faster, why do distributed computing at all? If we just wait, servers will be fast enough to do it all. I don't think so, and give some reasons why...
In response to my last posting, Berco Beute asked whether faster processors, faster networks, and larger computer capacity would allow all clients to become essentially terminals, with all processing done on the server. Berco was thinking that this might lessen the need for mobile code (which it would), but the stronger conclusion is that it would mean we really don't need to do distributed computing at all. All our computing could be done in one place, if we just wait patiently for the machines to get large enough, fast enough, and the networks good enough to allow that sort of concentration.
If this were possible, it would certainly make programming easier...no more messy partial failures to deal with, for example. We get rid of the 7 (or 8) fallacies of distributed computing by simply getting rid of the distributed computing, or at least limiting it to the channel between the client (which becomes essentially a very smart terminal) and the server.
This is the sort of design center that Plan 9 (the Bell Labs system) had. Users would interact via terminals (that looked a lot like Blits) with servers that were stuck away someplace else. This is also a lot like the Sun strategy with SunRays and servers. It does simplify administration and make programming easier.
But it isn't going to make the need for distributed computing go away. At best, it is a way of putting the problem off for a short period of time; at worst it is just pushing the problem back a level and giving us all an illusion which will bite us soon. The math simply doesn't support the idea; looking at the trends reinforces the need for distributed computing.
The trends to look at are those described by Moore's law having to do with processors and the trends in network traffic (not speed). Moore's law, we all know, says that the performance of a processor doubles every 18 months (or that the price is cut in half for the same performance). The trend in network traffic, however, is that it doubles every 12 months (or less). So the increase in network traffic is outpacing the increase in processor performance, at the same time that competent processors are becoming cheaper (and therefore being placed out on the edges of the network cloud). It's just math, folks--the processors can't keep up.
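The arithmetic above can be sketched in a few lines. This is just an illustration of the doubling periods stated in the text (18 months for processors, 12 months for traffic); the 10-year horizon is an arbitrary choice for the example.

```python
# Compound-growth sketch of the trends described above.
# Processor performance doubles every 18 months (Moore's law, as
# stated in the text); network traffic doubles every 12 months.

def processor_growth(years):
    """Relative processor performance after `years` years."""
    return 2 ** (years / 1.5)   # one doubling per 1.5 years

def traffic_growth(years):
    """Relative network traffic after `years` years."""
    return 2 ** years           # one doubling per year

for years in (1, 5, 10):
    cpu = processor_growth(years)
    net = traffic_growth(years)
    print(f"after {years:2d} years: processors x{cpu:7.1f}, "
          f"traffic x{net:7.1f}, gap x{net / cpu:5.1f}")
```

After ten years, traffic has grown roughly an order of magnitude more than processor performance, and the gap itself keeps compounding, which is the point: no amount of waiting lets a single server catch up.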
This means that the need for distributed computing is going to increase, not decrease. And part of this need is that more and more different kinds of computing devices, from servers to cell phones to automobiles to refrigerators will be on the network. Humans won't be part of most loops (which is why I worry more about program-to-program distribution) and mobile code is going to be key (an assertion without proof in this log; that will be the subject later).
Jim Waldo is a Distinguished Engineer with Sun Microsystems, where he is the lead architect for Jini, a distributed programming system based on Java. Prior to Jini, Jim worked in JavaSoft and Sun Microsystems Laboratories, where he did research in the areas of object-oriented programming and systems, distributed computing, and user environments. Before joining Sun, Jim spent eight years at Apollo Computer and Hewlett Packard working in the areas of distributed object systems, user interfaces, class libraries, text and internationalization. While at HP, he led the design and development of the first Object Request Broker, and was instrumental in getting that technology incorporated into the first OMG CORBA specification.