The Artima Developer Community

Weblogs Forum
Wither Those Tiers

4 replies on 1 page. Most recent reply: Jun 10, 2006 11:00 AM by Daniel Read

Frank Sommers

Posts: 2642
Nickname: fsommers
Registered: Jan, 2002

Wither Those Tiers
Posted: May 11, 2006 12:15 PM
Summary
The last decade witnessed the rise of the three-tier enterprise architecture. The experience learned and the need to process increasing amounts of data increasingly fast, are now leading some to rethink the wisdom of three-tier design. One alternative solution moves business logic and data management into a data grid.

The three-tier model, in which the presentation, the business logic, and the persistent data store each defines a unique application tier, became the dominant enterprise architecture design in the 1990s. While separation of tiers along those lines is enshrined in enterprise frameworks, such as J2EE, some now question the wisdom of architecting a new breed of applications along the three-tier model.

What originated as an extension to the two-tier client-server design seemed to fit the needs of Web applications well: an HTML-based user interface, accessed from a Web browser and generated in a presentation tier; data stored in and retrieved from a shared database management system; and business logic placed between data and presentation in a business logic tier. Darleen Sadoski of GTE and Santiago Comella-Dorda of Carnegie Mellon's Software Engineering Institute summarize the utility of the three-tier architecture:

The three tier architecture is used when an effective distributed client/server design is needed that provides (when compared to the two tier) increased performance, flexibility, maintainability, reusability, and scalability, while hiding the complexity of distributed processing from the user.

While a large and vocal opposition to the three-tier model has always existed, maintainability and reusability were obvious benefits of the three-tier architecture, as each tier depends only on the one tier beneath it: Changes in the database affect only the business logic tier, and evolution of business logic impacts only the presentation layer.

Scaling a three-tier architecture is not an obvious task, however. To paraphrase Amdahl's Law, a system scales only to the degree its weakest link scales. As more clients access the presentation tier, and those presentation requests must also access the business logic tier, the business logic tier must scale along with the presentation tier. And if the business logic relies on data stored in the database, then the database must scale up in tandem with the presentation and business logic tiers.
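
In its standard form, Amdahl's Law makes this limit concrete: if a fraction p of the work per request can be spread across additional servers while the remaining (1 - p) must serialize on a single shared tier (the database, say), then the achievable speedup with n servers is

speedup(n) = 1 / ((1 - p) + p / n), which can never exceed 1 / (1 - p).

For example, if 20 percent of each request is bound to a database that cannot be scaled out, adding presentation and business logic servers can never improve overall throughput by more than a factor of five.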

Scaling the tiers

There have been three principal solutions to scaling a three-tier system: vertical, horizontal, and caching. Vertically scaling the system involves increasing the capacity of components corresponding to each tier so each can handle higher workloads. Scaling in this manner involves scaling both the hardware and the software corresponding to each layer. For instance, scaling the database layer might require server hardware with symmetric multiprocessing capabilities (SMP), or even a cluster of commodity servers. Taking advantage of SMPs or clusters, in turn, will require software designed to operate in such environments.

Scaling horizontally involves distributing the load of each layer among multiple logical operational units. For instance, the database tier can be scaled by partitioning that tier into multiple databases, and then coordinating work between them through distributed transactions or some other coordination mechanism. The presentation tier can scale horizontally by routing incoming user requests to a server in a pool of server nodes dedicated to presentation logic. Still, the effects of horizontal scaling will be limited by the scalability of the underlying tier. If the presentation tier needs to access business logic, presentation can be processed only as fast as the business logic layer allows. A database layer might similarly become a bottleneck if business logic processing requires access to the database.

Caching aims to facilitate more independent scaling for each layer by managing the relationship between layers with a view to scalability. For instance, cache middleware between the presentation and business logic layers can more intelligently transfer data from the business to the presentation tier by, say, prefetching results, or performing some processing without the need to access the business tier (such as providing summary results, for instance). And armed with knowledge of the database tier's load characteristics, caching can arrange database reads and writes at times that maximize database throughput.
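
To make the idea concrete, here is a minimal, hypothetical sketch in Java (not modeled on any particular product's middleware) of a read-through, write-behind cache: reads are served from memory where possible and fall through to the tier beneath on a miss, while writes are queued and flushed to that tier in periodic batches, one way of arranging writes to maximize the throughput of the layer below.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical read-through, write-behind cache between two tiers.
public class WriteBehindCache<K, V> {

    // Abstraction over the tier beneath the cache (for example, a database).
    public interface Store<K, V> {
        V load(K key);                   // read one value from the underlying tier
        void storeAll(Map<K, V> batch);  // write a batch to the underlying tier
    }

    private final Store<K, V> store;
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<K, V>();
    private final ConcurrentHashMap<K, V> dirty = new ConcurrentHashMap<K, V>();
    private final ScheduledExecutorService flusher =
            Executors.newSingleThreadScheduledExecutor();

    public WriteBehindCache(Store<K, V> store, long flushIntervalMillis) {
        this.store = store;
        flusher.scheduleAtFixedRate(new Runnable() {
            public void run() { flush(); }
        }, flushIntervalMillis, flushIntervalMillis, TimeUnit.MILLISECONDS);
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {
            value = store.load(key);     // read-through on a cache miss
            if (value != null) cache.put(key, value);
        }
        return value;
    }

    public void put(K key, V value) {
        cache.put(key, value);
        dirty.put(key, value);           // defer the write until the next flush
    }

    void flush() {
        Map<K, V> batch = new HashMap<K, V>();
        for (Map.Entry<K, V> e : dirty.entrySet()) {
            // Remove only if the value has not changed since we read it,
            // so a newer update is never lost.
            if (dirty.remove(e.getKey(), e.getValue())) {
                batch.put(e.getKey(), e.getValue());
            }
        }
        if (!batch.isEmpty()) store.storeAll(batch);  // one batched write
    }
}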

Each of these scalability techniques is recursive: Vertical scaling can be achieved by virtualization of horizontally scaled resources—such as presenting a single-system image of a cluster—and horizontal scaling is often indistinguishable from vertical scaling from the layer above—a cluster of database servers or a partitioned database may appear to the business tier above as a single database. Caching itself can be scaled vertically—allocating a bigger in-memory cache, for instance—or horizontally with a distributed cache. Finally, employing several caching layers can make caching itself a recursive technique.

Chiefly because scaling each tier in the three-tier architecture requires specialized software and hardware, the conceptual three tiers often take on physical manifestations in the server room. Database servers often sit on dedicated shelves, and the presentation and business logic servers are similarly easy to point out. Physical separation of tiers simplifies system and network administration. Ironically, such physical separation also leads to scalability problems.

From database to data grid

I recently caught up with Nati Shalom, founder and CTO of GigaSpaces, whom I have known for a long time from the Jini community. Shalom’s company designed a product based on JavaSpaces that is popular with Wall Street firms and telecom operators that routinely process large amounts of data. "Today, if you look at the three-tier approach, it's both a logical and a physical architecture," noted Shalom. "That results in a lot of static elements in the application, and that is where bottlenecks are created."

Shalom pointed out two application patterns that are becoming increasingly important, yet are ill-suited to the classic three-tier division. The analytical model takes as its input a large amount of data—server logs, usage pattern logs, trading data, medical measurement records—and aims to derive some new information from the analysis of that data. While in the past such processing was the domain of only the largest organizations—chiefly because of high storage and processing costs—the analytical use case is increasingly becoming a requirement for all sorts of applications. For instance, given a user's previous history of visiting pages on a Web site, analytical processing of that data can determine what ads that user is most likely to click on, thus supporting a click-through advertising model, currently one of the more successful business models on the Web.

While the analytic model is read-mostly, Shalom's other emerging application model requires both reads and writes of data. In that application model, which can be called a data-flow model, a continuous stream of data arrives at one end of the system, and the application must perform some processing on that incoming data stream, saving the data along with the processing results at the other end. Such applications include financial trading and real-time monitoring applications. But this category also fits many dynamic Web applications where monitoring the system's health and performance is a critical application requirement, or where a large part of the output is based on processing the input (as, for instance, in portal applications).

Both of these application types must deal with large amounts of data, and they must quickly perform often compute-intensive processing on that data. To allow such systems to scale, it is advantageous to place the computation as close to the data as possible. Placing business logic in a separate tier from the database layer, especially if that tier corresponds to physical separation, prevents such systems from scaling.

Instead, these applications require breaking down the barriers between the various architecture tiers. GigaSpaces’ Shalom notes that "one solution [to moving computation close to the data] has been to delegate [much of the processing] to just one tier—often the database. But that just pushes the bottleneck into the database. Instead of burdening some of your resources with too much processing, it's much better to use all your resources intelligently, and route transactions to where those transactions can be performed most efficiently. [With intelligent transaction routing,] rather than having a centralized database, what you then have is a data grid."

Transaction routing with JavaSpaces

One way to perform such transaction routing is via the JavaSpaces model. JavaSpaces, part of the Jini Starter Kit, is a Java-centric implementation of the tuple spaces paradigm popular in high-performance computing applications. JavaSpaces in the context of these application domains can be imagined as a cache that coordinates work between master servers—those submitting computing tasks to a tuple space—and workers that take small chunks of tasks, perform computations on those tasks, and place the results back in the tuple space.
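
As a rough sketch of that master/worker interaction, the following assumes a JavaSpace proxy has already been obtained (for example, through Jini lookup) and uses hypothetical Task and Result entry classes; it illustrates the pattern only, not any particular product's API:

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

public class MasterWorkerSketch {

    // Entries are plain objects with public fields and a public no-argument
    // constructor; null fields in a template act as wildcards when matching.
    public static class Task implements Entry {
        public Integer jobId;
        public String payload;
        public Task() {}
        public Task(Integer jobId, String payload) {
            this.jobId = jobId; this.payload = payload;
        }
    }

    public static class Result implements Entry {
        public Integer jobId;
        public String answer;
        public Result() {}
        public Result(Integer jobId, String answer) {
            this.jobId = jobId; this.answer = answer;
        }
    }

    // Master: write task entries into the space, then collect the results.
    static void submit(JavaSpace space, int taskCount) throws Exception {
        for (int i = 0; i < taskCount; i++) {
            space.write(new Task(Integer.valueOf(i), "chunk-" + i), null, Lease.FOREVER);
        }
        for (int i = 0; i < taskCount; i++) {
            // Blocks until a Result with the matching jobId appears.
            Result r = (Result) space.take(new Result(Integer.valueOf(i), null), null, Long.MAX_VALUE);
            System.out.println("job " + r.jobId + " -> " + r.answer);
        }
    }

    // Worker: repeatedly take any Task, compute, and write back a Result.
    static void work(JavaSpace space) throws Exception {
        while (true) {
            Task t = (Task) space.take(new Task(), null, Long.MAX_VALUE);
            space.write(new Result(t.jobId, t.payload.toUpperCase()), null, Lease.FOREVER);
        }
    }
}

Because workers simply take whatever Task entries appear in the space, adding processing capacity is largely a matter of starting more worker processes against the same space.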

The result is an architecture that provides a data grid with a JavaSpace-based interface that allows reading and writing data to and from the data grid. Clients of this grid can belong to a presentation layer, such as Web servers that let users submit jobs and view the results, or they can be other data grids that coordinate work in support of a common operation. Such data grid elements can even be federated into a larger grid.

Data grid clients access the grid only through proxy objects. JavaSpace proxies are Java objects that can incorporate logic to route requests for data access or data input to the appropriate node, although such dynamic transaction routing is not part of JavaSpaces. Taking advantage of Jini's dynamic code mobility feature, such application logic does not occupy its own architecture tier, but is instead downloaded to JavaSpace clients at runtime.
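
To give a feel for what routing logic embedded in a proxy might look like, here is a hypothetical sketch (not the JavaSpaces API itself, which, as noted, leaves transaction routing to the implementation) in which a smart proxy hashes a routing key to pick one of several partition spaces:

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// Hypothetical smart proxy that partitions a data grid by routing key.
// An instance of a class like this could be downloaded to clients at
// runtime, so the routing logic never occupies a tier of its own.
public class RoutingSpaceProxy {

    private final JavaSpace[] partitions;

    public RoutingSpaceProxy(JavaSpace[] partitions) {
        this.partitions = partitions;
    }

    // Hash the routing key to select the partition that owns it.
    private JavaSpace route(Object routingKey) {
        return partitions[Math.abs(routingKey.hashCode() % partitions.length)];
    }

    public void write(Entry entry, Object routingKey) throws Exception {
        route(routingKey).write(entry, null, Lease.FOREVER);
    }

    public Entry take(Entry template, Object routingKey, long timeout) throws Exception {
        return route(routingKey).take(template, null, timeout);
    }
}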

While the space-based paradigm has proven to scale both the analytic and data-flow models, it requires a unique programming model that developers may not be familiar with. In addition, JavaSpace nodes must be installed and configured correctly. Several commercial JavaSpaces implementations help address these challenges. The GigaSpaces implementation, for instance, provides data grid access via JMS and JDBC interfaces in addition to implementing the JavaSpaces interface.

In addition to commercial solutions, several open-source tools now ease working with JavaSpaces. The open-source RIO project, for instance, helps better utilize available computing resources in a JavaSpace-based data grid. RIO helps match the capabilities of a computing resource, such as a server, with the requirements of a computing task. For instance, if a job requires, say, a fast processor, or certain amounts of main memory, the job's submitter can describe that requirement to the RIO system. Each RIO node, in turn, advertises its capabilities, and the RIO middleware matches jobs with the appropriate node.

Dennis Reedy, who developed RIO while a member of Sun's Jini team, and is now GigaSpaces' VP of Advanced Technologies, notes that such resource matching must occur in a dynamic fashion, keeping system administration of a RIO cluster to a minimum. "You had to log into [RIO nodes] before, and had to manually configure them. The bigger the cluster our customers had to deal with, the more work this turned out to be.” Not surprisingly, Reedy worked to automate that deployment in the GigaSpaces offering. “Under the covers, the system measures each node's capabilities, and attaches behavior to the nodes based on those measurements," says Reedy. "This allows us to provision more cache nodes as needed. We can scale the system up dynamically, but also scale it down," if computing resources are left unused.

Tierless architecture

If a data grid with a JavaSpace interface were to divide enterprise architecture into clients and servers, it could be considered a throwback to the client-server days, from which the three-tier design was an evolutionary step. However, code mobility, as well as the dynamic balancing act performed by an intelligent space implementation, renders clients and servers interchangeable. For instance, a node may perform computations in response to a client's request, acting as a server, but the same node may become a client to another node's space service. Jini's dynamic code mobility ensures that the application logic is distributed at runtime according to the space topology of the moment.

Taken to its limit, the space-based design with code mobility can lead to a withering of boundaries between application tiers. While a developer may still consider architecture tiers when constructing an application, those divisions would have few manifestations in how the application executes, let alone in physical deployment. Instead, the space-based model helps virtualize those computing resources. As John Clingan remarked,

The move to virtualization is enabling a more dynamic infrastructure, and I wonder how long it will take customers to move from "these applications run on these ... servers" to "our applications run somewhere on this grid".

While the three-tier model will always have its uses, separating application logic from data management will prove increasingly hard in the presence of large amounts of data and the need to rapidly process that data with finite computing resources. In addition, the trend toward rich-client applications, which move presentation logic, and even some application logic, out of the data center and onto users’ desktop systems, points to a relatively flat, quasi-tierless, data grid-style enterprise architecture, combined with some form of mobile code—whether downloadable Java classes or JavaScript libraries—pushing presentation logic out to clients.

How do you think the three-tier model will evolve? And where do you see the role of data grids in an enterprise architecture?


Stefan Lesser

Posts: 1
Nickname: slaser
Registered: Aug, 2005

Re: Wither Those Tiers Posted: May 12, 2006 12:20 AM
Finally, this is all about control.

These grid ideas reduce the amount of direct control you have over your applications, over your data, over your system.
You still have control, but it's much harder to determine which information resides on which machine.

I like this! This is a perfect way to keep the complexity low and manageable.

Managers don't like this. They want control. Or, better said, they need a feeling of control. And I think it's perfectly possible to implement mechanisms that provide this feeling of control, even when a new kind of architecture is used.

The success of solutions like the data grid will not be decided by the architecture itself, but by the feeling of control these architectures can provide.

Do you know these centralization/decentralization waves IT always goes through?
There has been a huge centralization trend going on for years now. Every company centralizes everything today.
I am sure this is about to change into a huge decentralization trend soon. Virtualization is a first (very small) step in that direction.

I'm looking forward to the time when I don't get asked "How can we move all IT to headquarters and manage everything centrally?" but instead someone asks me "How can we move everything to our outlets and get rid of this data center?"

Technology is ready for this scenario. Are we?

Mike Petry

Posts: 34
Nickname: mikepetry
Registered: Apr, 2005

Re: Wither Those Tiers Posted: May 12, 2006 5:24 AM
These are exactly some of the ideas I have been looking for. I will follow the links and get up to speed on this stuff before I fully form an opinion, but I do have some immediate thoughts.
The production, transmission, and storage of data are outpacing Moore's Law. The ability to utilize information is bounded by the ability to process data. There need to be new architectural models to manage very large amounts of data.
Pulling large amounts of information into a data model that must be managed for consistency and concurrent access is not a good thing, but coupling business logic tightly with data storage has its own issues. The impedance mismatch between data storage and object-oriented programming has been well documented, but it is interesting to note that this old problem may have a large impact on future architectures.
Mechanisms like facades (or wrapper classes) around JDBC ResultSets seem a little cheesy.
Another thought is that a data model is just a cache. Caching is not something to be taken lightly and probably should be considered as an optimization technique. One idea would be to cache to a local database that would replicate to a centralized database across a LAN/WAN. The benefits would be built-in concurrent access and transactions. The downers include the replication process itself.
An idea is to write to a database that is not quite a local cache, or a central server, but instead, a cohesive database that performs one duty on a grid of like nodes.
I think this is the grid approach and it does sound interesting.
thanks

Cameron Purdy

Posts: 186
Nickname: cpurdy
Registered: Dec, 2004

Re: Wither Those Tiers Posted: May 12, 2006 11:32 AM
There's an eerie similarity here:

The Data Grid Aggregate Functions capability in Coherence 3.1 takes this parallel processing concept to its natural conclusion for algorithms such as Monte Carlo simulations: Each server in the Data Grid will process in parallel the data that is available locally, and then the partial results from each server are rolled up into a final aggregate result, achieving the maximum theoretical throughput as defined by Amdahl's Law. Applications such as algorithmic trading and real-time risk can plug in their own custom aggregate functions to achieve unparalleled throughput in grid environments.

Maybe at least include a link?

Peace,

Cameron Purdy

Tangosol Coherence: Clustered Shared Memory for Java

http://www.tangosol.com/coherence.jsp

Daniel Read

Posts: 2
Nickname: danielread
Registered: Aug, 2003

Re: Wither Those Tiers Posted: Jun 10, 2006 11:00 AM
Just wanted to say that I very much enjoyed this article. I don't typically work on the two types of systems described as appropriate for the grid model, but it's great food for thought. Also, I agree with the commenter that the desire for control (or perceived control) might be a drag on adoption of these ideas in corporate and government settings. But where the system design problems are specialized enough, success depends on letting go of these kinds of fears.

Thanks again,
Dan
