Leading-Edge Java
Distributed Web Continuations with RIFE and Terracotta
by Jonas Bonér and Geert Bevin
August 8, 2007

Summary

In this article, we discuss how the RIFE Web framework helps you become productive and efficient in building conversational Web applications. Productivity with RIFE is in large part due to RIFE's unique approach to Web development—its use of continuations for conversational logic, and complete integration of meta-programming to minimize boilerplate code.

We also introduce you to Terracotta and its JVM-level clustering technology, and show you how Terracotta and RIFE can work together to create an application stack that allows you to scale out and ensure high availability for your applications without sacrificing simplicity and productivity. This means working with POJOs, and a minimum of boilerplate and infrastructure code.

Introduction

For years, the relative merits of stateful versus stateless architectures have been debated. For a long time, we have been told that, for example, stateful session beans in EJB are evil, and that in order to scale out a Web application you cannot keep state in the Web tier, but have to persist it in some sort of Service of Record (database, file system, and so on).

With the recent advent of Web 2.0, we are faced with new possibilities and requirements. Today, we can write Web-based applications with the feature set of a rich client that are, at the same time, so responsive that they give the impression of running locally. That brings back the importance of statefulness. We now have a new generation of Web frameworks, such as RIFE, Seam, Spring Web Flow, GWT, and DWR, that all focus on managing conversational state that can be bound to a variety of different scopes. In short, stateful Web applications are back.

Still, the question remains: how can we scale out and ensure the high availability of a stateful application while preserving the application's simplicity and semantics?

In this article, we will answer that question. We start out by explaining what RIFE's Web continuations are all about, the concept behind them, and how you can use them to implement clean, stateful conversational Web applications with minimal effort. Then we will discuss the challenges in scaling out a Web 2.0 stateful application, introduce you to Terracotta's JVM-level clustering technology, and show how you can use Terracotta in practice to scale out RIFE applications.

What are Continuations?

A continuation encapsulates the state and program location of an execution thread so that the execution state associated with the thread may be paused and then resumed at some arbitrary time later, and in any thread.

The easiest way to explain the concept is to draw an analogy with saving and loading a computer game. Most computer games let you store your progress while playing a game. Your location and possessions in the game will be saved. You can load this saved game as many times as you want and even create other ones based on that state later on. If you notice that you took a wrong turn, you can go back and start playing again from an earlier saved game.

You can think of continuations as saved games: anywhere in the middle of executing code, you can pause and resume your code afterwards.

Continuations that capture the entire program execution aren't very useful in practice, since they require that you shut down the running application before you can resume a previous continuation. In this multi-user world with lots of concurrency, this is clearly not acceptable. Undoubtedly, this is one of the reasons why continuations remained mostly an academic topic for the past thirty years. It was only when partial continuations started being used in web application development that the true power of the concept emerged for developers.

Partial continuations work by setting up a well-known barrier where the capture of program execution starts. Anything that executes before this barrier works independently of the continuation and always continues running. That independent context can, for example, be a servlet container. The partial continuation contains only the state built up from the barrier onwards - for example, a web framework action or component.

In practice, a partial continuation corresponds to what one particular user is doing in the application. By capturing that action in a partial continuation, the execution for that user can be paused when additional input is required. When the user submits the data, the execution can be resumed. The chain of continuations created this way builds up a one-to-one conversation between the application and the user, in effect providing the simplicity of single-user application development inside a multi-user execution environment.

Introduction to RIFE Continuations

The barrier for partial continuations in RIFE is set at the start of the execution of RIFE's reusable components, called elements. These combine the benefits of both actions and components by abstracting the public interface around the access path (URL) and data provision (query string and form parameters). This means that when an element is a single page, the data provision comes straight from the HTTP layer. However, when an element is nested inside another element, the data can be sent by any of the elements that are higher up in the hierarchy. This allows you to start out writing pages in an action-based approach; as you detect reusable functionality, elements can be embedded inside others, turning them into components without having to code against another API.

When the element classes are loaded, they are analyzed, and when continuation instructions are detected in the code, the byte-code is modified to provide the continuation functionality. The most basic instruction is the pause() method call. This essentially corresponds to a "save game" command, to continue our earlier analogy.

The continuation created when this instruction is reached receives a unique ID and is stored in a continuation manager. To make it possible for the user to interact with the application while the conversation is paused, the user interface has to be set up beforehand. In a Web application this means that all the HTML required to build the page has to be sent through the response to the browser. By using specialized tags within forms and links, RIFE automatically inserts the parameters required to remember the ID of the continuation.

When a user submits a form or clicks on a link that contains the ID of the continuation, this ID is sent to the framework through an HTTP request. The framework then interacts with the continuation manager to retrieve the corresponding continuation. If a corresponding continuation is found, the entire continuation, including its state, is cloned, and that clone resumes the program execution with its own ID. The previous version of the continuation still exists: when a user presses the back button in the browser, or uses links or forms that lead back to a previous state in the Web conversation, the previous continuation will be resumed. This gracefully solves the typical back-button and multi-window problems of stateful Web applications.
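To make this clone-on-resume behavior more concrete, here is a minimal sketch of the idea in plain Java. It is not RIFE's actual implementation or API; the Continuation and ContinuationManager types below are purely illustrative.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a toy continuation store that clones on resume,
// so that earlier continuations stay available for the back button.
interface Continuation {
  Continuation deepClone(); // copies the captured local state and program position
  void proceed();           // resumes execution from where it was paused
}

class ContinuationManager {
  private final Map<String, Continuation> store =
      new ConcurrentHashMap<String, Continuation>();

  // Called when execution pauses: remember the continuation under a fresh ID.
  public String register(Continuation continuation) {
    String id = UUID.randomUUID().toString();
    store.put(id, continuation);
    return id; // this ID is what ends up in the generated links and forms
  }

  // Called when a request arrives that carries a continuation ID.
  public Continuation resume(String id) {
    Continuation original = store.get(id);
    if (original == null) {
      return null; // unknown or expired conversation step
    }
    // Clone the captured state and register the clone under its own ID; the
    // original stays in the store, which is what makes the back button and
    // multiple browser windows behave correctly.
    Continuation clone = original.deepClone();
    register(clone);
    clone.proceed();
    return clone;
  }
}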

Let's now look at a trivial example that shows what the code looks like in practice. We will create a simple counter that remembers how many times the user has pressed a button. If the button has been pressed ten times, we'll print out a message saying so. This is the Counter.java source file that does what we just explained:

import com.uwyn.rife.engine.Element;
import com.uwyn.rife.engine.annotations.*;
import com.uwyn.rife.template.Template;

@Elem(submissions = {@Submission(name = "count")})
public class Counter extends Element {
  public void processElement() {
    int counter = 0;                          // local state, captured by the continuations
    Template t = getHtmlTemplate("counter");  // loads the counter.html template

    while (counter < 10) {
      t.setValue("counter", counter);         // fill in the current count
      print(t);                               // send the page to the browser
      pause();                                // suspend here until the form is submitted
      counter++;
    }

    t.setBlock("content", "done");            // show the "done" block instead of the form
    print(t);
  }
}

Before actually explaining what goes on, we will first provide the source code for counter.html:

<html>
<body>
  <r:v name="content"></r:v>
 
  <r:bv name="content">
    <p>Current count: <r:v name="counter"/></p>
    <form action="${v SUBMISSION:FORM:count/}" method="post">
      <r:v name="SUBMISSION:PARAMS:count"/>
      <input type="submit" value=" + " />
    </form>
  </r:bv>
 
  <r:b name="done">You pressed the button ten times.</r:b>
</body>
</html>

We won't go into the details of RIFE's template engine here, but a few aspects are worth highlighting so that this example makes more sense to you. The r:v tags are value placeholders that are filled in at runtime, either explicitly from the Java code (through setValue() and setBlock()) or automatically by RIFE (the SUBMISSION:FORM and SUBMISSION:PARAMS values, which wire the form up to the count submission). The r:bv tag defines a block whose content also serves as the initial content of the content value, while the r:b tag defines the done block, which only appears once the Java code assigns it to that value with setBlock().

Let us now return to the Java implementation of the RIFE element. We're creating a counter that prints out a message when its value reaches the number 10. The user is able to press a button that increases the counter by one. This has to be hooked up to the RIFE element through a form submission.

The class annotations declare that there is one piece of data submitted: count. We've shown how count is used in the template. Since this is the only submission in the element, we don't need to detect which submission has been sent; the simple fact that one arrived at the element will make RIFE use the default one.

In RIFE, submissions are always handled by the element they belong to. In the code above, the submission will simply cause the active continuation to be resumed. This happens since the SUBMISSION:PARAMS:count template tag automatically generates a hidden form parameter with the continuation ID. When the request with that ID is handled by RIFE, the corresponding continuation is looked up and resumed.

The above example uses a regular Java while loop to create the 'flow' of the application. With the pause() method call, the execution stops on the server side; meanwhile the user can interact through the browser with the HTML page that was generated before the pause(). When the execution is resumed on the server, the while loop continues, stopping only when the local counter variable reaches the value 10.

The advantage of this approach is that you can use regular Java statements to write the flow of your web application. You don't have to externalize application flow through a dedicated language. An additional benefit is that you can use all the Java tools to write and debug the entirety of your application. You can set breakpoints and watches to analyze the behavior of complex flows and step through to easily identify bugs or unexpected behavior. All the local state is also automatically captured and properly scoped and restored for one particular user.

To make state-handling easy, we don't impose serialization requirements on objects: objects are simply kept in the heap. This can present a problem when your application needs to be fault tolerant and scalable over multiple nodes. As we describe in the next three sections, the integration of Terracotta and RIFE brings enterprise scalability and high availability to native Java continuations, and to RIFE.
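Before turning to scale-out, here is a short sketch of what this heap-based state handling looks like in practice: a variation on the Counter element above that keeps ordinary, non-serializable objects in local variables across pause() calls. It is modeled on the earlier example; the history template name and its values are hypothetical.

import java.util.ArrayList;
import java.util.List;

import com.uwyn.rife.engine.Element;
import com.uwyn.rife.engine.annotations.*;
import com.uwyn.rife.template.Template;

// A plain object that deliberately does not implement Serializable.
class Click {
  final long timestamp;

  Click(long timestamp) {
    this.timestamp = timestamp;
  }
}

@Elem(submissions = {@Submission(name = "count")})
public class HistoryCounter extends Element {
  public void processElement() {
    // Ordinary heap objects; nothing here needs to implement Serializable.
    List<Click> clicks = new ArrayList<Click>();
    Template t = getHtmlTemplate("history");

    while (clicks.size() < 10) {
      t.setValue("count", clicks.size());
      print(t);
      pause();   // the list and its contents are captured along with the continuation
      clicks.add(new Click(System.currentTimeMillis()));
    }

    t.setValue("count", clicks.size());
    print(t);
  }
}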

The Need for Scale-Out and High Availability

Predictable capacity and high availability are operational characteristics that an application in production must exhibit in order to support the business. This basically means that an application has to remain operational for as long as the Service-Level Agreements require it to. The problem is that developing applications that operate in this predictable manner is just as hard at the 99.9% uptime level (roughly nine hours of downtime a year) as it is at 99.9999% (barely half a minute a year).

One common approach to addressing scalability has been to "scale up": add more power, in terms of CPU and memory, to a single machine. Today most data centers run cheap commodity hardware, and this fact, paired with the increased demand for high availability and failover, instead calls for an architecture that allows you to "scale out": add more power by adding more machines. And that implies the use of some sort of clustering technology.

Clustering has been a hard problem to solve. In the context of Web and enterprise applications, such as those built with RIFE, it in particular means ensuring high availability and failover of user state in a performant and reliable fashion. In the case of a node failure - due to an application server, JVM, or hardware crash - enabling "sticky sessions" in the load balancer won't help much. (Sticky sessions mean that the load balancer always redirects requests from a particular user session to the same node.) Instead, what is needed is an efficient way of migrating user state from one node to another in a seamless fashion.

Let us now take a look at a solution that solves these problems in an elegant and non-intrusive way.
