Froth and Compatibility

A Conversation with Rob Gingell, Part II

by Frank Sommers
January 20, 2003

Summary
Artima.com columnist Frank Sommers interviews Rob Gingell, Sun Microsystems Fellow and Chief Engineer and chair of the Java Community Process, about the pressure on vendors both to create froth (the value added on top of standards) and to maintain compatibility in a multi-vendor industry.

When you write a piece of Java code, you know that code will run on a variety of platforms: Windows, Linux, Mac OS, Palm OS, and so forth. Platform-to-platform Java portability works because the Java Virtual Machine (JVM), the Java byte codes, and your program's APIs all adhere to strict specifications.

However, what if those specifications, and their implementations, were developed in an open source manner? Would Java still preserve its remarkable platform independence, or would it fragment into myriad incompatible versions and implementations? How could you be sure your servlet or MIDlet would work when executed on a different JVM and OS?

These are just some of the questions the Java Community Process (JCP) recently examined. The JCP, the primary forum for Java's advancement, is in theory an open process: anyone paying a small membership fee can participate. In practice, it has been branded a politically charged club of large corporations, with Sun at the helm, all vying for a piece of the Java pie. And because of the JCP's restrictive licensing model, the open source community has shunned it completely.

In response to those charges, and for fear of missing out on the open source action, the JCP adopted a new set of rules in November 2002. The most important of those rules explicitly allows JCP members to develop new Java standards in an open source manner, while remaining under the JCP's umbrella and retaining its official blessing.

Frank Sommers asked Sun Microsystems Fellow and Chief Engineer Rob Gingell, who also chairs the JCP, about the impact of those changes. In this interview, published in two installments, Gingell tells us what causes fragmentation in a marketplace and how Java can avoid that danger:

  • In Part I, Standards and Innovation, Gingell talks about open source licensing, source and binary compatibility, binary standards, and the JCP.
  • In this final installment, Gingell discusses the pressures on vendors to both maintain compatibility and add value in a multi-vendor industry, how competing companies can cooperate through the JCP, and how the voices of small companies and individuals can be heard in the JCP.

An Emerging Application Binary Interface

Frank Sommers: The Solaris application binary interface (ABI) has served to ensure compatibility between Solaris and its applications. Will the Java binary format and JVM assume similar roles in Sun's future? In other words, will Java be Sun's universal ABI?

Rob Gingell: Yes, the primary ABI of our future lies in IP/JVM (Internet Protocol/Java Virtual Machine). The JVM serves as the instruction set architecture. A collection of IP-based protocols serves the role we formerly ascribed to operating systems. That is a softer definition than the one we used during most of the 1990s, namely Solaris/SPARC. It doesn't deny Solaris/SPARC, Unix, or microprocessor development in general, but it does recognize the growth of a new class of applications, enabled by the network's ubiquity. Those applications add to our existing business in a powerful way.
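To make the "JVM as instruction set architecture" idea concrete: every compiled class file begins with a fixed header whose layout the JVM specification mandates, which is what lets any conforming JVM, on any platform, execute the same binary. The following minimal sketch (ours, not Gingell's) reads that header:

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Reads the fixed header of a compiled .class file. The layout
    // (magic number, then minor and major version) is defined by the
    // JVM specification; that fixed binary contract is what makes
    // class files portable across JVM implementations.
    public class ClassFileHeader {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in =
                    new DataInputStream(new FileInputStream(args[0]))) {
                int magic = in.readInt();           // always 0xCAFEBABE
                int minor = in.readUnsignedShort();
                int major = in.readUnsignedShort();
                System.out.printf("magic=0x%X, class file version %d.%d%n",
                        magic, major, minor);
            }
        }
    }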

Innovation versus Fragmentation

Frank Sommers: Sun is now an active Linux vendor. You mentioned in previous interviews your desire to merge Solaris and Linux into something like SunLinux. Do you imagine SunLinux keeping a certain edge over other Linuxes? If so, would that fragment the Linux market? For instance, some things, such as JVMs, might run on the Sun variety but not on other Linux species. Conversely, if Sun does not add significant capabilities to its Linux version, what advantages will SunLinux have over, say, Red Hat's Linux?

Rob Gingell: Your question addresses the fundamentals of how a multi-vendor industry operates for the good of a common customer base. The "open systems" promise to customers was the ability to treat every purchase/deployment decision independently of all others. There's no vendor lock-in. Indeed, customers lock in vendors by the standards they hold the vendors to. That's the ideal, in a steady state.

The problem with that ideal is that needs don't stay constant, and customers constantly seek improvements from their suppliers. Products improve through such demands. If a customer genuinely depends on a capability, he or she is locked in to those who supply it until, or unless, everyone supplies it.

Here's the basic conundrum: If you only implement the standard, you don't solve any new problems. If no new problems are solved, where does the evolving "practice" come from that finds its way into new standards? If you use a new thing, aren't you thus locked in? How do you meet new needs without doing a new thing?

Here's how life works: Assuming a shared initial condition, some derivation will occur, often in cactus-like fashion across an industry through competition. With time, the economic benefit of standardization is sought, to codify what has emerged as existing practice. If the derivation branching grows too large, we criticize it as fragmented. If the derivation is zero or too small, we criticize it as stagnant and non-innovative. There's a happy medium in which the "froth" ahead of the standard progresses the industry but doesn't damage the base of commonality that defines a marketplace.

It's useful to remember that Sun created its operating systems group not to make an OS as such, but rather to be able to competently build systems, including engineering at the operating system level. Until about 1985, we largely kept Sun's Unix as a set of differences against whatever the current state of Unix's BSD distribution was. Around 1985, though, the weight of our differences led to a flip, such that we treated new Berkeley distributions as source material to update, or contribute to, our own source base. With time, we accumulated an operating system.

In a hypothetical world in which the Unix community means largely a set of shared source code, the reason Sun would continue to have an OS group is the same reason we had one in the first place: to be able to create products that work at all levels of the system. Sun now has the best team in the industry. The ability to direct those resources to problems important to us is valuable, far more valuable than the software residue that results from having directed them there. We would indeed argue through our sales force that our rendition is better, but we'd make that argument based on our ability to create, maintain, and evolve it, and less on its uniqueness.

That isn't a terribly new idea: NFS (Network File System) was the first Sun-originated addition to Unix as a supporting structure of distributed computing. Employing the practices of the day, we immediately made it available to all comers, and we did it almost entirely "behind the curtains." NFS was also remarkable in its transparency and its systematic approach to working behind the standard interfaces, although it cut corners on obscure pieces of the standards that were inconsistent with, or obstructive to, the presence of networking.

In creating NFS, we did fragment the source code technology of a Unix implementation by introducing a variation. We acted to close the fragmentation by making NFS available to everyone, and we also did it in a fashion that largely worked with, instead of conflicted with, the expectations of existing applications. There was no fragmentation of binaries or source code—making the work sharable across the industry's technology space.

Currently, we expect the most growth in applications built around the network, rather than around a single computer. Java is the primary way that will happen, and Java's developer pool already vastly exceeds any that Unix has ever had. We don't tell anyone to convert to Java. This isn't about changing Unix into something else as much as it is simply recognizing that new application forms are being created, forms that are largely additive to the world we already have.

But that doesn't make the fragmentation question moot. It applies to Java and to the Unix environment that we're most likely to use in fabricating systems products. Whatever the domain, it's an exercise in tension management, and also one of philosophy: do you innovate in a manner that is destructive of that tension, or supportive of it?

Web Services Interoperation

Frank Sommers: A JVM must ensure that it interprets byte codes according to the JVM specifications. Those specs, in turn, go a long way toward ensuring that a piece of byte code produces similar results across JVMs. In the world of XML-based Web services and the Simple Object Access Protocol (SOAP), no such specifications exist. Indeed, SOAP interoperability is already an intense discussion topic on mailing lists. Do you believe that differences in data-encoding mechanisms and, consequently, on-the-wire incompatibility could fragment the XML Web services world the way Unix fragmented? For example, an "Apache SOAP Web service," a ".Net Web service," and a "SunONE Web service" might diverge the way HP-UX, SunOS, and Linux did.

Rob Gingell: Such differences would damage the premise of Web-based services architectures using those protocols, because you wouldn't get interoperability between products. Since the whole point was to construct functionality from possibly unrelated services, that lack of interoperability would represent either a fatal flaw or a limit on the application of the protocols.

However, that doesn't need to happen. To a large extent, network protocols enjoy some of the characteristics of binary standards, in that they impose firm "yes" or "no" constraints with respect to interoperability. I think we won't really see problems at the SOAP level. Instead, we might see Web services interoperation problems at the application layer of the stack that uses SOAP as a session layer protocol. That application layer might embed some non-specified information such that you get interoperability only by sharing an implementation. A hopeful sign here is the activity of groups like SOAP Builders, an industry-wide organization of developers doing interoperability testing.
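To see where that wire contract lives, here is a minimal sketch, ours rather than Gingell's, that posts a hand-written SOAP 1.1 envelope over HTTP. The endpoint URL and the getQuote operation are hypothetical; the point is that two stacks interoperate only if they agree on how the body's payload is encoded:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Posts a hand-written SOAP 1.1 envelope. The envelope is the wire
    // contract: the session layer (SOAP) is firmly specified, but the
    // encoding of the body's payload is where implementations can
    // quietly diverge. Endpoint and operation are hypothetical.
    public class SoapCall {
        public static void main(String[] args) throws Exception {
            String envelope =
                "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
                "<soap:Envelope xmlns:soap=" +
                "\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body><getQuote xmlns=\"urn:example-quotes\">" +
                "<symbol>SUNW</symbol></getQuote></soap:Body>" +
                "</soap:Envelope>";

            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/quoteService").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "\"urn:example-quotes#getQuote\"");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(envelope.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }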

The Problem of Underspecification

Frank Sommers: If differences in XML-based data encoding could lead to interoperability problems between Web services, wouldn't it be better to use Java byte code as the data encoding for Web services, and turn Java into an open source Web services standard?

Rob Gingell: If you define "Web services" as a network architecture that uses HTTP as a transport, SOAP as a session layer, and XML as a presentation layer, then you couldn't literally do that. Cynics might argue that that was the whole reason "Web services" was defined the way it was. You could, of course, have a different protocol architecture defined as you suggest—Jini does exactly what you describe.

Jini addresses the sort of issues you're concerned about: It solves the problem of specification by sending an implementation around to compensate for a service's underspecified elements: "You don't know about this, but here's the code that makes it possible for you to know." Jini thus addresses the semantic problem that data, alone and underspecified, potentially presents.

The basic problem is underspecification, which results from laziness, obfuscation, and/or failures of imagination. Various technology bases differ in how easy, or hard, they make it to underspecify something that appears to be working. The abstract version of this problem is how to create a class hierarchy that can be rationally subclassed without creating the problem you describe. For XML, or any other data structure, it's about creating data that can be extended and still be useful to those who don't understand the extension.
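One common discipline for that kind of extensibility is the "must ignore" rule: a consumer processes the elements it understands and skips the rest, so extended documents remain useful to readers who predate the extension. Here is a minimal sketch of such a reader; the quote vocabulary and its v2:rating extension are hypothetical:

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    // A "tolerant reader": it consumes the elements it understands and
    // silently skips any extension it does not, so the extended
    // document stays useful to this older consumer.
    public class TolerantReader {
        public static void main(String[] args) throws Exception {
            String doc =
                "<quote><symbol>SUNW</symbol><price>3.15</price>" +
                "<v2:rating xmlns:v2='urn:example-ext'>buy</v2:rating></quote>";

            Element root = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(doc.getBytes("UTF-8")))
                .getDocumentElement();

            NodeList children = root.getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                Node n = children.item(i);
                if (!(n instanceof Element)) continue;
                switch (n.getNodeName()) {
                    case "symbol":
                        System.out.println("symbol: " + n.getTextContent());
                        break;
                    case "price":
                        System.out.println("price: " + n.getTextContent());
                        break;
                    default:
                        // Unknown extension: ignore it rather than fail.
                        break;
                }
            }
        }
    }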

In Jini environments, the only real common practice you need is to use Java. From that presumption, an application can provide you all the semantic extension you need through mobile code. Mobile code, in turn, is enabled by the abstraction of the JVM, which makes the underlying hardware's instruction set unimportant. This seems like a great idea, unless your business depends on what that instruction set is, or on implementation-defined behaviors. But that's an area for cynics to delve into.
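The mobile-code idea can be sketched in plain Java, without the actual Jini APIs: the client compiles against an interface only, and the implementation arrives as bytecode fetched over the network. The codebase URL and proxy class name below are hypothetical; in Jini they would come from a lookup service:

    import java.net.URL;
    import java.net.URLClassLoader;

    // The client compiles only against this interface; it never sees
    // the implementation at build time.
    interface QuoteService {
        double quote(String symbol);
    }

    // A bare sketch of mobile code (this is not the Jini API): the
    // downloaded class carries the semantics the client lacks.
    // Codebase URL and class name are hypothetical.
    public class MobileCodeClient {
        public static void main(String[] args) throws Exception {
            URL codebase = new URL("http://example.com/service-dl.jar");
            try (URLClassLoader loader =
                    new URLClassLoader(new URL[] { codebase })) {
                Class<?> proxyClass =
                    loader.loadClass("com.example.QuoteServiceProxy");
                QuoteService service = (QuoteService)
                    proxyClass.getDeclaredConstructor().newInstance();
                // The JVM abstraction makes the local instruction set
                // irrelevant to the downloaded code.
                System.out.println(service.quote("SUNW"));
            }
        }
    }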

Proprietary J2EE Extensions

Frank Sommers: The tactic of "embrace and extend" is often associated with Microsoft. However, it seems that lately the J2EE community has started to experience a similar phenomenon: As customers request features, J2EE vendors are, naturally, obligated to provide those added capabilities in their J2EE app servers, leading to vendor-specific extensions. As developers take advantage of WebLogic-, WebSphere-, or SunONE-specific extensions, their code becomes less and less portable across J2EE implementations. Could that trend lead to the same vendor lock-in the J2EE specs were supposed to protect against? Does the JCP, or some other Java process, address that issue?

Rob Gingell: Earlier we talked about the tension involved in managing the froth on top of standards. In the J2EE marketplace, the froth has thus far been minimal. The multi-billion dollar J2EE industry is made up of a small number of commercial J2EE products. The J2EE reference implementation, which can be downloaded at no cost from http://java.sun.com, has hundreds of thousands of downloads. Why all those downloads?

J2EE application developers often use the reference implementation as their development platform, specifically to avoid getting inadvertently locked in to a specific J2EE implementation. At last year's JavaOne, we announced the Application Verification Kit: a tool developers can use to ensure their applications depend only on what they choose to depend upon.
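The actual Application Verification Kit is considerably more thorough, but as a toy illustration of the kind of check such a tool performs (this is not the AVK), the sketch below flags source-file imports outside the standard java.* and javax.* namespaces, the kind of dependency that can quietly tie an application to one app server:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    // Scans a .java source file and prints any import that falls
    // outside the standard java.* and javax.* namespaces. A crude
    // stand-in for real verification tooling.
    public class PortabilityCheck {
        public static void main(String[] args) throws IOException {
            try (Stream<String> lines = Files.lines(Paths.get(args[0]))) {
                lines.map(String::trim)
                     .filter(l -> l.startsWith("import "))
                     .map(l -> l.substring(7).replace(";", "").trim())
                     .filter(p -> !p.startsWith("java.") && !p.startsWith("javax."))
                     .forEach(p -> System.out.println("possibly vendor-specific: " + p));
            }
        }
    }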

As a standard, J2EE has tended towards minimal froth. That's good for customers in one respect, but you could argue it might be better if a few more competing enhancements existed beyond the specification. There is no absolute right answer to this. It's a tension that needs to be maintained.

Early on, as the platform was being created, keeping the froth minimal was probably helpful in bootstrapping the marketplace. With the marketplace seemingly well-developed at this point, do we initiate more advancement by only bringing things through the JCP, or do we do things in products and then bring them to the JCP? Or, do we develop some things in the JCP that represent commonly agreed-upon attributes of new capabilities, and sort out the not-agreed-upon parts in products? How does one best find what practice to codify without doing some of it?

I don't think we should aim to minimize the froth. Part of managing the tension involves having a marketplace of developers who understand their trade-offs, and a marketplace of customers who demand that the froth be kept in check. Tools such as the Application Verification Kit ensure that when developers and their customers choose a value-added, pre-standard feature, they know they're using just what they need, and no more, and that their continued loyalty to that extension depends on it becoming common practice.

Ultimately, all standards exist to advise us what not to waste time on: to cover things in which variation is not value-enhancing and, in fact, is value-detracting. For technologies in rapid change, keeping things close together helps accelerate the entire market and encourages adoption. For mature technologies, the boundary conditions become more a matter of what the market can tolerate.

Members of the JCP executive committee have held intermittent discussions about that issue. From my perspective, it's a thoughtful conversation that has presented hard issues for people to talk about. There will never be an answer in the form of absolute thresholds defining what should be done in or out of the JCP; there is just this tension to be managed. That the JCP has been, and continues to be, a community dedicated to compatibility, one that has enabled more than three million Java developers, speaks well of the desire to manage the tension and avoid destructive variation.

Resources

The Java Community Process home page:
http://www.jcp.org/en/home/index

Information on the Java Application Verification Kit for the Enterprise:
http://java.sun.com/j2ee/verified/

Overview of the Java Community Process:
http://www.jcp.org/en/introduction/overview

The timeline a JSR follows as it makes its way through the Java Community Process:
http://www.jcp.org/en/introduction/timeline

JCP FAQ:
http://www.jcp.org/en/introduction/faq

A brief summary of the Java Community Process procedures:
http://www.jcp.org/en/procedures/overview

Overview of Java Specification Requests (JSRs):
http://www.jcp.org/en/jsr/overview


About the author

Frank Sommers is founder and CEO of Autospaces, a company dedicated to bringing Web services and Jini to the automotive sales and finance industries. He is also Editor-in-Chief of the Newsletter of the IEEE Task Force on Cluster Computing, and Artima.com's Web Services columnist.