In a recent blog post, John Clingan traces the evolution of application deployment from one application per server, to a model where an app "runs somewhere on a grid," to a dynamic infrastructure that virtualizes application containers and even the applications themselves, trends that hold important implications for developers.
In many ways, Java pioneered a mass-market demand for virtualization: Java's "platform-independence" is the result of Java code running in a Java Virtual Machine, isolating that code from most differences in the underlying operating systems and hardware.
Virtualization, "the abstraction of resources across many aspects of computing" (see Wikipedia), has recently taken several new turns, mainly in response to the need to simplify application deployment and management. While virtualization of storage and even network resources has been popular for years—think SANs (storage-area networks) or NFS, for instance—more recent tools, such as Xen and Solaris containers, allow the virtualization of operating-system environments or complete servers. And it doesn't have to stop there: The virtualization of an entire data center environment might not be far off.
John Clingan argues that "virtualization is about to hit mainstream, if it hasn't already," fueled by the confluence of several enabling factors such as the price-performance of servers capable of running virtualization software, the maturity of high-quality virtualization tools, and the need to render data centers more dynamic:
The customers I have worked with over the years maintained fairly rigid environments. "This server runs this application". "These servers run these applications". "Our department purchased these servers out of our budget so they run our applications." It has been like this for over a decade now.
[...] The move to virtualization is enabling a more dynamic infrastructure, and I wonder how long it will take customers to move from "these applications run on these virtual servers" to "our applications run somewhere on this grid".
Clingan suggests that beyond grid virtualization lie application-container and even application-level virtualization techniques, part of what he calls a "dynamic infrastructure":
It won't take that long for customers to move from virtualization to a dynamic infrastructure[...] Dynamic infrastructure [...] includes not only the OS and hardware, but application containers and the applications themselves.
Just as standardized application-server containers, such as those defined by J2EE, allow one to plug various app server implementations into an environment with no, or at most minimal, changes to the applications running in those containers, standardized application-level interfaces might also emerge over time to allow the virtualization of entire application domains. That could have interesting implications for developers.
The most important implication might be that more of what used to be called application-layer software is pushed into the infrastructure layer. For instance, the Java Content Repository API (JCR) is an effort to standardize through the Java Community Process an interface to content repository systems from Java code. As more content repository products implement the JCR API, an organization will be able to simply plug in any JCR-compliant implementation into its data center, alleviating the need for users to concern themselves with the brand or maker of a particular content-management product. The Java Data Mining API (JDM) is another example of standardizing common functionalities required in an application domain. Implementations of JDM will allow data mining functionality to be increasingly pushed into the enterprise IT infrastructure from what today is still implemented by proprietary vendor products.
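The plug-in substitutability that JCR aims for can be sketched in a few lines of Java. Note that `ContentRepository` and `InMemoryRepository` below are hypothetical, drastically simplified stand-ins for the real `javax.jcr` interfaces of JSR 170; the point is only that application code written against a standard interface is indifferent to which vendor's implementation sits behind it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, minimal stand-in for a standardized repository API.
interface ContentRepository {
    void store(String path, String content);
    String retrieve(String path);
}

// One "vendor's" implementation: a simple in-memory store. A competing
// product could back the same interface with a database or a file system.
class InMemoryRepository implements ContentRepository {
    private final Map<String, String> nodes = new HashMap<>();
    public void store(String path, String content) { nodes.put(path, content); }
    public String retrieve(String path) { return nodes.get(path); }
}

public class RepositoryClient {
    // Application code depends only on the standard interface, so the
    // deployed implementation can be swapped without changing this method.
    static String publish(ContentRepository repo, String path, String text) {
        repo.store(path, text);
        return repo.retrieve(path);
    }

    public static void main(String[] args) {
        ContentRepository repo = new InMemoryRepository();
        System.out.println(publish(repo, "/news/today", "Virtualization hits mainstream"));
    }
}
```

Swapping in a different `ContentRepository` implementation requires no change to `publish()`; in a real deployment, the swap would happen in the data center's configuration rather than in code.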
Since virtualization often occurs in the infrastructure layer, moving more application functionality into the infrastructure layer serves virtualization. Indeed, application domain-specific standardization efforts are often driven by the need to virtualize enterprise resources. With standardization efforts under way in domains as diverse as health care and automotive finance, enterprise application development might take on several new meanings.
In one camp will be developers implementing standardized application interfaces, such as a content management system or an insurance claims management system. These developers will likely work for vendors, or in open-source communities, that will need to differentiate from competitors by addressing specific application concerns, such as scalability, ease-of-use, or the need for special configurations. The model here might be existing infrastructure vendors, such as purveyors of application-servers or databases.
In another camp will be developers creating modules or add-ons to such systems. Most of these developers will work for enterprises using these systems, or for commercial offshoots of open-source projects.
In his blog post, John Clingan points out that "virtualization doesn't drive consistency. [...] It won't take that long for customers to move from virtualization to a dynamic infrastructure." But virtualization fueled by open standards and open-source implementations of those standards could lead to consistency, as it already has in the area of application servers and databases, for instance.
Thus, a developer working for, say, an insurance company, will not have to design and implement a claims management system. Rather, there would be one or a handful of standards, and a developer or the IT manager would be able to choose from several competing implementations of those standards that together provide most of the claims management functionality. Because those products, whether open- or closed-source, will implement the standards, an organization will be free to "virtualize" such a resource, and plug in a different implementation at will.
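That "plug in a different implementation at will" step can be made concrete with a small sketch of deploy-time substitution: the concrete claims-management implementation is named in configuration rather than in code. `ClaimsSystem`, `VendorAClaims`, and `VendorBClaims` below are hypothetical; any class honoring the shared interface can be dropped in.

```java
// Hypothetical standardized interface for a claims-management domain.
interface ClaimsSystem {
    String fileClaim(String policyId, double amount);
}

// Two competing implementations of the same standard interface.
class VendorAClaims implements ClaimsSystem {
    public String fileClaim(String policyId, double amount) {
        return "A-" + policyId; // vendor A's claim-number scheme
    }
}

class VendorBClaims implements ClaimsSystem {
    public String fileClaim(String policyId, double amount) {
        return "B/" + policyId; // vendor B's scheme, behind the same interface
    }
}

public class ClaimsLauncher {
    // Reflection stands in for whatever lookup mechanism the container
    // provides; the implementation class is chosen by configuration.
    static ClaimsSystem load(String implClass) throws Exception {
        return (ClaimsSystem) Class.forName(implClass)
                .getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        String impl = System.getProperty("claims.impl", "VendorAClaims");
        ClaimsSystem claims = load(impl);
        System.out.println(claims.fileClaim("P-1001", 2500.0));
    }
}
```

Running with `-Dclaims.impl=VendorBClaims` swaps the product without recompiling any application code, which is the essence of virtualizing an application-domain resource.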
Just as few enterprises today would pay their developers to code up an app server, a transaction processing monitor, or an HTTP server, in the future, few enterprise developers might be tasked to write an inventory management system, a data mining application, blogging software, or insurance claims management code. Instead, developers might increasingly participate in open-source communities centered around implementations of standardized application interfaces, and enterprises would benefit from using the resulting high-quality software, which they could deploy in a virtualized fashion. In addition, enterprises would pay developers mainly to customize or add modules on top of services that are by then pushed deep into the enterprise infrastructure.
I would certainly welcome the day when I no longer would have to code another user management module, inventory system, or workflow application, and instead just rely on high-quality implementations of such components and focus on adding value to my end-users by providing features they really care about.
But how long before standardized application interfaces become widespread? And what kinds of application interfaces should be standardized? The Java Content Repository and Java Data Mining API designers factored common application functionality into an API that most of the vendors in that domain could agree on. Could that work be replicated in other domains, too? How should developers or organizations drive that kind of standardization? Would this be through an organization such as the JCP, or through an open-source community such as Apache? And what would the proliferation of such widely available standards and associated implementations mean to developers?
Frank Sommers is a Senior Editor with Artima Developer. Prior to joining Artima, Frank wrote the Jiniology and Web services columns for JavaWorld. Frank also serves as chief editor of the Web zine ClusterComputing.org, the IEEE Technical Committee on Scalable Computing's newsletter. Prior to that, he edited the Newsletter of the IEEE Task Force on Cluster Computing. Frank is also founder and president of Autospaces, a company dedicated to bringing service-oriented computing to the automotive software market.
Prior to Autospaces, Frank was vice president of technology and chief software architect at a Los Angeles system integration firm. In that capacity, he designed and developed that company's two main products: A financial underwriting system, and an insurance claims management expert system. Before assuming that position, he was a research fellow at the Center for Multiethnic and Transnational Studies at the University of Southern California, where he participated in a geographic information systems (GIS) project mapping the ethnic populations of the world and the diverse demography of southern California. Frank's interests include parallel and distributed computing, data management, programming languages, cluster and grid computing, and the theoretic foundations of computation. He is a member of the ACM and IEEE, and the American Musicological Society.