In Part 1 of this series, I demonstrated two approaches to defining the interaction between a client and a news feed server -- one involving documents and protocols and the other involving objects and interfaces. Now I'd like to compare their relative merits. First I'll look at the advantages of objects compared to documents, and then I'll cover their disadvantages.
One obvious difference between the object and document approaches is that developing the news feed protocols involves network handshake and data-model design, whereas developing the news feed API requires object-oriented design. Because it is object-oriented, the object approach lets client and server programmers work at a higher level of abstraction than the protocol and document approach does. To explain what I mean, I'll relate a story about Bjarne Stroustrup, the creator of C++.
I first learned C++ in early 1992 from Borland's "World of C++" videotape, which begins with a short introduction by Stroustrup. In that introduction, he says that the main goal of C++ was to raise the level of abstraction in programming. As I watched the videotape, I was able to grasp the object-oriented concepts and C++ syntax presented, but I was puzzled by Stroustrup's comment. I wasn't sure what he meant by "raising the level of abstraction," or why he thought doing so was desirable.
In April 1999, I attended a small C++/Java conference in Oxford, England, at which Stroustrup gave a keynote. In the keynote, he claimed that the two ideals for programming should be: keep independent things independent, and work at the highest level of abstraction you can afford.
After the keynote, I asked Stroustrup why he felt working at as high a level of abstraction as possible was a fundamental programming ideal. He answered that it allows details to be hidden, which makes code shorter. "The more details, the more mistakes," he said. Plus, code size impacts maintainability: the more code you have, the harder it is to maintain. He also mentioned that code at higher levels of abstraction is amenable to tools that analyze and optimize. He summarized by saying that code written at a higher level of abstraction is easier to understand, write, and maintain.
I believe one advantage of the object approach is that programmers get the benefits, outlined above in Stroustrup's comment, of working at a higher level of abstraction. To a great extent, these are the same benefits reaped by programmers when they switch from procedural programming to object-oriented programming. When you write a client program that interacts with a server via an object sent across the network, you reap the benefits of object-oriented programming in your distributed system.
One of the benefits of object-oriented programming, for example, is its clean way of separating interface and implementation. One of the main strengths of Jini's architecture is that the network protocol by which a Jini service object talks across the network to a server is part of that object's implementation. Although a client and server need to agree on a network protocol to get the first object sent from server to client, once the first object is in place, client-server communication can take place via object interfaces.
Unlike an object, of course, a document cannot make a socket connection back across the network. Thus, for clients to interact with the documents they receive from a server, the client and server need to agree on a protocol as well as the document's data model. For example, if a user types information into a form contained in an HTML document, and presses Submit, the Web browser opens a socket to the Web server, and does, most likely, an HTTP POST that contains that information. The Web server processes this POST and returns another HTML document. If a client program wishes to interact with the server via documents, rather than just silently consume documents, the client and server must agree on protocols beyond the initial protocol that gets the document to the client.
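To make that protocol-level contract concrete, here is a minimal sketch of the kind of HTTP POST request a browser builds when a form is submitted. The host, path, and form field names are hypothetical; the point is that the client must know the exact byte-level shape the server expects.

```java
// Sketch of the document approach's contract: the client must construct
// the precise request the server's protocol demands. Host, path, and the
// "query" form field are invented for illustration.
public class PostRequestSketch {

    static String buildPost(String host, String path, String formData) {
        return "POST " + path + " HTTP/1.0\r\n"
             + "Host: " + host + "\r\n"
             + "Content-Type: application/x-www-form-urlencoded\r\n"
             + "Content-Length: " + formData.length() + "\r\n"
             + "\r\n"
             + formData;
    }

    public static void main(String[] args) {
        // Every detail below -- method, headers, encoding -- is part of the
        // contract both sides must agree on in advance.
        System.out.println(buildPost("news.example.com", "/search", "query=jini"));
    }
}
```

Change any line of that request format on one side without changing the other, and the contract is broken.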
Once a Jini service object has been downloaded into a client, by contrast, the client and server need only agree on that object's interface. The network protocol, which is part of the service object's implementation, can differ between service vendors, and can change over time for the same vendor. (Note that because the service object is sent across the network to the client from the service provider, it will always be up to date. It will always be compatible with the service provider -- and will talk the correct network protocol.) Because the network is one of the fussiest and most dynamic parts of the system being programmed, it is useful to be able to choose the most appropriate protocol for each situation.
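As a sketch of what "agreeing only on the interface" might look like, consider a hypothetical NewsFeed interface. The names here are invented, and a real Jini proxy would likely talk across the network rather than answer locally; the point is that the client code touches only the interface.

```java
import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

// Hypothetical service interface: the only contract the client and the
// service provider share.
interface NewsFeed {
    List<String> getHeadlines(String topic);
}

// One vendor's service object. How it gets headlines -- a proprietary
// socket protocol, HTTP, or a local cache -- is an implementation detail
// the client never sees.
class CachedNewsFeed implements NewsFeed, Serializable {
    public List<String> getHeadlines(String topic) {
        // A real proxy might open a socket here; this sketch answers locally.
        return Arrays.asList("Headline about " + topic);
    }
}

class NewsFeedClient {
    public static void main(String[] args) {
        // In Jini, this object would arrive from a lookup service.
        NewsFeed feed = new CachedNewsFeed();
        System.out.println(feed.getHeadlines("jini"));
    }
}
```

A different vendor could ship a completely different implementation of NewsFeed, and this client code would not change.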
Another basic difference between the object and document approaches is the nature of the contract between the client and server. For network-mobile documents, the protocol (including the data models of any documents) forms the contract between the client and server. For network-mobile objects, the object's interface forms the contract. One important advantage of objects is that an object's contract is easier to evolve over time than a document's or a protocol's.
Coming up with a good design, whether you are designing an API or a DTD, is difficult. Figuring out, agreeing upon, and communicating the design requirements can often be a significant challenge. Another challenge is actually figuring out how best to model the functionality (in the API case) or the information (in the DTD case) that the requirements demand. On top of all that sits another challenge, perhaps the most difficult of all: predicting the future.
Requirements evolve over time. As a result, a design's ability to gracefully accommodate future changes is usually an important quality. But how do you achieve such a design when you don't know what those changes will be? Mark Johnson, JavaWorld's XML columnist, discussed the difficulty of anticipating future evolution in XML DTD designs:
It's difficult to get a DTD right because it's difficult to get a data model right, and a DTD is a data model. The hardest thing about getting a data model right is that data models evolve. There's "right" for today, and "right" for tomorrow. But you don't know what's "right" for tomorrow up front, so you have to guess. Data modeling is, in part, the art of including enough generality to anticipate future schema changes without kicking the model up to such a high level of abstraction that it's inefficient or incomprehensible.
Fortunately for API designers, Java pays close attention to the need to evolve contracts. The little-read "Chapter 13: Binary Compatibility" of the Java Language Specification describes all the ways a Java class or interface can change without breaking binary compatibility with client code that was compiled against an earlier version. Similarly, the Java Object Serialization specification provides mechanisms that enable serialized objects to be exchanged between parties that know about different versions of the object's class. Java's support for versioning of classes, interfaces, and objects makes it easier to cope with requirements that evolve over time. XML, by contrast, does not offer much help in the face of evolving requirements. On the topic of evolving DTDs, Elliotte Rusty Harold, author of XML: Extensible Markup Language (IDG Books, 1998) and creator of the XML developer Website "Cafe con Leche" (see Resources), said this:
XML does not provide any explicit versioning support. You have to do it all yourself. However, [if you write a DTD in 2000 that validates year 2000 tax return XML documents] it is fairly easy to write a new DTD in 2001 that will validate both 2000 tax returns and 2001 tax returns. Writing that DTD in 2000 or 1999 is a little harder.
Typically, designers of data models leave room for future change by specifying in the initial data model some abstraction that can be ignored if not recognized. For example, the Java class file format (a data model for defining Java types) includes a data entity called an attribute. Each attribute in a class file has a name that indicates the attribute's structure and semantics. The original Java virtual machine specification defined the structure and meaning of several named attributes, and identified the attributes that Java virtual machines are required to recognize and use. The specification also stated that Java virtual machine implementations must silently ignore any attributes they do not recognize.
The attribute abstraction of the Java class file format enables the class file data model to evolve such that class files that contain new attributes remain backwards compatible with old virtual machine implementations that don't recognize the new attributes. Designers of an XML DTD or any other data model can incorporate an abstraction similar to the class file's attribute to make their data models more accommodating to future changes in requirements.
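The attribute idea can be sketched with a simplified, hypothetical record format (not the real class-file format): each record carries a name and a byte length, so a reader can skip records it does not recognize and stay compatible as the data model grows.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the class-file "attribute" idea with an invented record
// layout: UTF-encoded name, 4-byte body length, then the body bytes.
public class AttributeReader {

    // Encode one named record in the sketch's layout.
    static byte[] record(String name, byte[] body) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeUTF(name);
            out.writeInt(body.length);
            out.write(body);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot occur on in-memory streams
        }
    }

    static byte[] concat(byte[] a, byte[] b) {
        byte[] r = new byte[a.length + b.length];
        System.arraycopy(a, 0, r, 0, a.length);
        System.arraycopy(b, 0, r, a.length, b.length);
        return r;
    }

    // Count records whose name the reader knows; skip the rest silently.
    // Because every record declares its length, unknown records can be
    // skipped without understanding their contents.
    static int countRecognized(byte[] data, String known) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            int recognized = 0;
            while (in.available() > 0) {
                String name = in.readUTF();
                int length = in.readInt();
                in.skipBytes(length); // a real reader would parse known bodies
                if (known.equals(name)) recognized++;
            }
            return recognized;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A stream containing a new, unrecognized record type still parses cleanly on an old reader, which is exactly the compatibility property the class-file attribute provides.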
Although dealing with evolving requirements can be a challenge when you are working on a monolithic application, it is far more difficult when you are working on a distributed system. In a monolithic application, you can adopt a new contract that is incompatible with the old and update all parties to the old contract to understand the new one. For example, in a monolithic Java application, you could break an interface's contract by changing the name of a method in that interface. If you recompile the entire application, however, the compiler will find all the code that was broken by your change. If you fix all those areas of the code so that they use the new method name, your application will once again compile.
In a distributed system, on the other hand, you often don't have the option of updating code that is broken by a change to a contract. In a public system, such as Moreover.com's news feed, you don't control all of the system's pieces. In fact, you don't necessarily even know who controls all the pieces. And even if you did control every piece of the distributed system, it might be physically impossible to update them all quickly enough to satisfy the users.
Generally, in distributed systems, once you have a contract between a client and server, you can't break it. But you usually still need to evolve it. This is why Java's support for versioning of classes and serialized objects is so important: Java's versioning support lets you evolve an API that forms a client-server contract without breaking parties familiar only with the contract's previous version. Therefore, this versioning support is one of the prime benefits of mobile objects over mobile documents.
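A minimal sketch of the serialization side of this versioning support, using a hypothetical Headline class: by pinning serialVersionUID, the class can evolve while streams written by an earlier release still deserialize.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch of serialization versioning. The Headline class is invented.
// Pinning serialVersionUID tells serialization that future revisions are
// meant to be compatible; a later release could add fields, and old
// serialized Headlines would still load (new fields default to null/zero).
public class Headline implements Serializable {
    private static final long serialVersionUID = 1L; // pin the version
    String title;
    // A future release could add "String author;" here without breaking
    // streams written by this release.

    Headline(String title) { this.title = title; }

    static byte[] toBytes(Headline h) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(h);
            oos.flush();
            return bos.toByteArray();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    static Headline fromBytes(byte[] b) {
        try {
            return (Headline) new ObjectInputStream(
                new ByteArrayInputStream(b)).readObject();
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```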
Now that I've identified a few advantages of objects over documents, I'd like to consider some disadvantages of using the object approach for client-server interaction.
One potential disadvantage of the object approach is the amount of client-side overhead required to receive and use a mobile object. To host a Jini service object, for example, a client needs a Java virtual machine and a significant set of Java APIs. If your client only needs to deal with one or two kinds of services, taking the protocol approach will likely yield a client program with a smaller footprint. Why? Because the footprint of a client taking the object approach will, by definition, include the footprint of some incarnation of the Java Platform.
If the machine that will be hosting your client has sufficient resources (for example, if you will be running your client on a PC or workstation), selecting and installing an implementation of the Java Platform is quite straightforward. The future trend, however, is that more embedded devices will be connected to networks, and those embedded devices will want to offer and consume services across those networks. The economics of many embedded devices are such that very tiny increments in unit cost can quickly translate into real money. As a result, there is usually great pressure to constrain the computation resources of an embedded device to the minimum amount possible. If an embedded device can get by with speaking just a handful of protocols, therefore, the chances are good that the protocol approach will be deemed more economically viable than the object approach. (Note that in this discussion I am considering the economic effects of just the technical differences between the object and document approaches. Embedded device vendors who are interested in Jini also need to worry about signing a commercial agreement with Sun in conjunction with the Sun Community Source License (SCSL), and paying the required logo fee.)
If you are developing an embedded light-switch device, for example, and you want to offer a simple on/off-switch service across the network from the device, it would likely not make economic sense to put a JVM in that light switch. Instead, your light switch would talk a protocol.
Yet with a little help, a protocol-based service could still participate in a Jini federation. For example, somewhere nearby on the network to which the light switch is connected, perhaps in a set-top box or some other central host or gateway, a Java Platform could reside. On top of that Java Platform, some software (a "surrogate") could serve as an object-oriented front end to the light switches. Through this surrogate, clients and other services can use object-level interfaces to talk to the light switches. The light switches themselves talk to the surrogate via a network protocol. The surrogate project at Jini.org is currently working to define a standard way to bring protocol-based services to Jini federations using such an architecture.
Another concern raised by the mobile object approach is the increased potential for denial-of-service assaults, be they intentional or the result of an unintended bug. Although the Java Platform has an extensive security model that you can use to prevent untrusted code from taking many actions that could represent security threats, the security model does not guard against certain denial-of-service attacks. For example, a network-mobile object could fire off threads, allocate memory, or open sockets until the client-side resources are fully consumed. Or, network-mobile objects could merely neglect to release the fair share of client resources they legitimately claimed, resulting in a gradual resource leak.
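One partial defense on the client side is to run a hosted object's work through explicitly bounded resources, so a greedy or buggy object hits a ceiling rather than exhausting the host. A sketch using a fixed-size thread pool that rejects excess tasks (the capacity numbers are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of capping the threads a hosted object can consume: a small,
// bounded pool that rejects work beyond its limits instead of letting a
// hosted object spawn threads without bound.
public class BoundedHost {

    static ExecutorService makeBoundedPool() {
        return new ThreadPoolExecutor(
            2, 2,                                 // at most two worker threads
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(4),  // at most four queued tasks
            new ThreadPoolExecutor.AbortPolicy()  // reject anything beyond that
        );
    }
}
```

This bounds only threads; memory and sockets would need their own limits, which is why such denial-of-service concerns remain hard to close completely.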
Denial-of-service attacks in general are difficult to guard against. Even the protocol approach has potential avenues for these attacks. A Web server, for example, could serve up an infinite length page that would fill up the client browser's cache. Or, a server could be kept so busy dealing with fake client requests that it has no time to handle actual client requests. But the mobile object approach, because it involves injecting code into a client, opens up many more avenues for creative denial-of-service attacks compared to the mobile document approach. This aspect of mobile objects will generally make it infeasible to pull mobile objects into certain kinds of environments, such as mission-critical servers.
Another security-related disadvantage of mobile objects is that despite Java's security model, mobile objects still carry a greater security risk than mobile documents that don't contain executable code. Although Java's security model has extensive and flexible mechanisms to protect client-side systems from malicious or buggy mobile code, various implementations of the Java Platform have occasionally contained bugs that opened up security holes. Although these bugs have historically been promptly fixed, and no actual damage has ever resulted from them, the very possibility of future bugs raises the risk of hosting mobile objects compared to documents. In addition, the complexity of defining a security policy more lenient than the standard applet sandbox creates the risk of mistakes that could also open up security holes. For some security-sensitive environments, therefore, these risks of bugs in the security model implementation, or human error in configuring it, may disqualify the mobile object approach.
A third disadvantage of the object approach involves testing. My last C++ contract, before I switched to Java, was with a company that provided a distributed system that enabled servers at insurance companies to exchange data with clients at general agents. Over the years, this company had switched technologies several times. Given that customers are naturally hesitant to upgrade a system that is already working, especially if they would have to pay for the upgrade, this company was still supporting just about every client or server incarnation it had ever shipped. When it came time to test a new software release candidate, the software quality assurance (SQA) department would pull out its test matrix, a two-dimensional matrix with clients on one axis and servers on the other, to plan the test. If the release candidate was a client, for example, SQA would want to test that client with all possible servers with which it might be used. How could SQA know if this client was going to work with every possible server if it didn't test the client with every one? (OK, to be honest, this SQA department didn't usually get to test the whole matrix. They just wanted to -- but that's a different story.)
One of the challenges of the mobile object approach to distributed computing is that, given that objects can glide so effortlessly across networks, it is almost impossible to know where an object you expect to send might land or where one you expect to receive might originate. The problem is not that the test matrix is too big, but that it is undefined. So how will you ever achieve robust, reliable distributed systems based on mobile objects if you can't test the performance of those mobile objects in all possible environments?
In the world of mobile objects, you simply can't test every possible client-server combination. You can, however, build a test matrix out of some known, and hopefully representative, subset of combinations, and test those. You can also do standalone tests for compliance with the contract to which the client or service is supposed to adhere.
I think that one key to achieving robust, reliable distributed systems, whether they're based on documents or objects, is a well-defined contract between the parties. Just as with mobile objects, you often won't be able to test a protocol-based server against every kind of client that will connect with it, but you can test the server's compliance to the protocol specification. To have meaningful compliance tests in either the object or protocol approach, the specification of the contract against which you are testing compliance needs to be detailed and complete.
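Such a standalone compliance test might look like this sketch, written against a hypothetical NewsFeed contract with two invented terms (a result is never null, and no headline is blank):

```java
import java.util.List;

// Sketch of contract-compliance testing: rather than testing an
// implementation against every possible peer, verify it against the
// written terms of the contract. The NewsFeed interface and its two
// contract terms are invented for illustration.
public class ComplianceCheck {

    interface NewsFeed {
        List<String> getHeadlines(String topic);
    }

    // Returns true if the implementation meets the contract's basic terms.
    static boolean complies(NewsFeed feed) {
        List<String> result = feed.getHeadlines("technology");
        if (result == null) return false;               // term 1: never null
        for (String headline : result) {
            if (headline == null || headline.isEmpty()) // term 2: no blanks
                return false;
        }
        return true;
    }
}
```

Any implementation, from any vendor, can be run through the same check, which is what makes the test meaningful even when the full client-server matrix is unknowable.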
In addition to having well-defined written specifications, it will also be important to have good compatibility test suites. Whether you are writing an implementation of a service API, or a client, a test suite that identifies areas where your client or service departs from the specification will be an invaluable tool. If a compatibility test suite isn't included along with the specification and javadoc documentation for an API, then everyone will likely interpret the documentation in slightly different ways, resulting in distributed systems that are slightly unreliable.
A fourth disadvantage of the object approach is that it doesn't scale well on the server side. To be certain a server will be able to handle large numbers of client requests concurrently, the programmer of a server needs to be able to control how and when finite resources such as threads, sockets, and file handles are allocated to service client requests. If each client were to send the server a mobile object through which that server would interact with the client, the server couldn't be certain how many threads, sockets, and file handles each of those client objects would consume, or when they would consume them. This is why in the standard Jini approach, the client receives a mobile object from the server, not vice versa. The client interacts with the service via the object interface. The server, by contrast, interacts with its service object (which is running in the client) via a network protocol.
The trouble with the standard approach is that, sometimes, we will want servers to act as Jini clients. I recently developed a Jini chat service for Sun Microsystems that, had a laptop not been stolen the night before, would have been demonstrated during the Friday keynote at JavaOne. To participate in the chat service, a client looked up the chat service in a Jini lookup service. The chat service objects of all clients exchanged messages and other information by writing entries to a JavaSpace. Once the client received and started using the chat service object, the object contacted a lookup service to retrieve proxies to a JavaSpace and a transaction manager. The chat service object then fired off several threads to renew leases and to monitor messages being posted into the chat room, and people entering and leaving the room.
While working on the chat service, I at one point had a phone conversation with John Whetherill, a Jini evangelist at Sun. Whetherill had implemented a chat program on the wireless RIM pager that was sold at JavaOne, and we needed to discuss how to integrate Whetherill's functionality with the Jini chat service on which I had been working. The only way to interact across the network from the RIM was via HTTP, so Whetherill had written a servlet that accepted HTTP messages from the RIM, which allowed RIM users to chat with each other. On the phone, Whetherill and I brainstormed about how he could enhance his servlet so that RIM users could participate in the Jini chat service.
My first inclination was to recommend to Whetherill that he simply turn his servlet into a client of the Jini chat service. Given that the chat service functionality was already implemented in the bowels of the chat service object, I figured the easiest thing for Whetherill to do would be to have his servlet grab a chat service object for each RIM user, and just invoke the methods of the object like any Jini client. The trouble, of course, was that each service object fired off several threads and opened several sockets, and we were expecting that upwards of 500 RIM users could potentially end up trying to chat at the same time. We talked about many different ideas, but we couldn't figure out how to get the servlet as Jini client approach to scale. In the end, Whetherill had his servlet simply interact directly with the JavaSpace, replicating in his servlet much of the code that made up the chat service object.
Ultimately, I think the way to deal with server-side scaling when servers want to act as Jini clients is to have multiple servers available and some way to distribute the incoming mobile objects among those servers based on their available resources. But I also suspect that the general case will be that servers will be speaking protocols to clients, or to proxy objects injected into clients.
One final potential disadvantage of mobile objects over mobile documents is download time. An early complaint about Java applets, for example, was that they took too long to download compared to Web pages. Of course, you can have huge Web pages filled with lots of gratuitous graphics. Such pages could easily take longer to download than a svelte Java applet. Download time has to do with bandwidth and the amount of data being transmitted, irrespective of whether the data is HTML text or bytecodes.
One of the reasons Web pages are perceived to be faster is that entire services are split up among many different pages, each of which is downloaded individually. For example, when using the Yahoo email service, you may end up downloading tens to hundreds of individual pages each session. Users have been trained to accept a certain amount of wait time for each Web page. When an applet comes down, however, the entire service is typically downloaded at once. After an applet arrives, the users will usually not have to wait anymore. But even if the wait time to download an applet is less than the cumulative wait time to download each individual page of a service, the users still perceive the applet as slower because the wait time is all up front.
I think the extra time cost of downloading large objects can be managed in three ways. First, objects, like Web documents, should be kept to the minimum size possible. Second, there is no reason objects can't download other pieces of a service on demand, much like web pages are requested one at a time, so long as users are trained to expect this kind of behavior. Third, you can use caching to reduce the distance objects must travel across the network, and hence the time the client (and user, if present) must wait for them. (Of course, increases in bandwidth will also help address this problem.) I believe download time is something that needs to be, and can be, managed, regardless of whether you are downloading objects or documents.
To me, the primary advantage of the object approach over the protocol approach is flexibility. By moving the contract between parties of a distributed system from the bits and bytes of network protocols to the higher level of abstraction of object interfaces, you gain flexibility in how you fulfill and evolve that contract. Because the contract of an object interface is couched in terms of behavior, not in terms of information (as with a document's data model), the contract of an object can be more abstract than the contract of a document. The more abstract a contract is, the more ways that contract can be fulfilled.
Perhaps the most important flexibility offered by the Jini service object is the service provider's ability to decide what protocol, if any, its service object will use to talk across the network. A service provider could, for example, fully implement the service locally in the service object itself. Or, a service provider could make the service object an RMI or CORBA stub that forwards all method invocations across the network to a remote object. The service object could also simply translate client requests received through the object's interface into the bytes of some proprietary socket protocol understood by the server. Or the service object could be a "smart proxy," which partially implements the service locally but sometimes enlists the help of a server or servers across the network. In short, a Jini service object gives the service provider the flexibility to choose the best protocol for each situation.
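The "smart proxy" variation can be sketched as a service object that consults a local cache and crosses the network only on a miss. The Backend interface below stands in for whatever protocol the vendor chose; all names here are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a smart proxy: partially local, enlisting the server only
// when necessary. The client sees just the proxy's methods; whether a
// call crosses the network is the proxy's private business.
public class SmartProxy {

    interface Backend {
        String fetch(String key); // hides the actual wire protocol
    }

    private final Backend backend;
    private final Map<String, String> cache = new HashMap<>();

    SmartProxy(Backend backend) { this.backend = backend; }

    String lookup(String key) {
        // Only a cache miss results in a (simulated) network round-trip.
        return cache.computeIfAbsent(key, backend::fetch);
    }
}
```

The vendor is free to change the caching policy, or the protocol behind Backend, in a later release of the proxy, without the client's knowledge or consent.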
In his article, "The End of Protocols" (Resources), Jim Waldo, Sun's chief Jini architect, put it this way:
Systems that are based on a protocol [...] need to fit the needs of all clients and services to the single protocol. The protocol must be completely general, and as is often the case when something needs to be good for everything, these protocols are often less than optimal for particular, specialized communications. Protocol design, like any engineering design, is often a trade-off between efficiency and generality. In systems that are designed around a one-size-fits-all protocol, such decisions need to favor generality. [...] This [Jini's mobile proxy object] approach gives great flexibility to what protocol is actually used. Different services can invent their own specialized protocols that are optimized for that particular pair of proxy and service. Protocols can evolve over time as new ideas are tried out.
This flexibility in deciding how to implement the service object interface amounts to flexibility in managing the network -- whether to talk across the network at all, and if so, what protocol to use. This flexibility is extremely useful when working with distributed systems, because you get more choices in dealing with the most variable and unpredictable part of the distributed system: the network itself.
To discuss the material presented in this article, visit my discussion forum at: http://www.artima.com/jini/jf/objvsdoc2/index.html.
This article was first published under the name Objects versus Documents for Client Server Interaction, Part 2 in JavaWorld, a division of Web Publishing, Inc., July 2000.
Bill Venners has been writing software professionally for 14 years. Based in Silicon Valley, he provides software consulting and training services and maintains a Web site for Java and Jini developers, artima.com. He is author of the book: Inside the Java Virtual Machine, published by McGraw-Hill.