From JavaOne 2009: Breaking Through JVM Memory Limits

A Conversation with Azul's Gil Tene

by Frank Sommers
June 1, 2009

Summary
In this interview with Artima, Gil Tene, CTO and co-founder of Azul Systems, explains why Java applications typically use only a few gigabytes of memory, out of the tens of gigabytes potentially available on commodity servers.

While the declining price of memory has allowed server makers to significantly increase the RAM available on commodity servers, a typical Java application uses about as much memory today as it did at the beginning of the decade, says Gil Tene, co-founder and CTO of Azul Systems, in this interview with Artima:

When you look back over this decade, you can observe that the memory sizes of individual application instances haven't really grown that much. In the year 2000, it was already common to see applications with a gigabyte, or slightly more, of memory. Applications with a memory footprint of that size were already practical. It was normal, for example, to deploy a WebLogic server with a gigabyte of heap on a commodity server with 2-3GB of physical memory.

Today, we still see applications using 1GB, or slightly more, of RAM. With this much time having passed, you would expect applications to be using a lot more memory by now. But that hasn't happened: The practical size for a single [application] instance hasn't changed since about 2000.

To see why that's an anomaly, you have to look at it from a historical perspective. Moore's Law gives us a doubling of the number of transistors on a chip every 18 months. That has worked out very well for the past 40 years or so: You can buy roughly twice the memory for the same price every 18 months.

Doubling every 18 months works out to a little over 100x every decade. If you look back at the amount of memory used by applications over the past thirty years, you'll see that they tended to take advantage of that trend.
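As a rough check on that figure: doubling every 18 months over a ten-year span compounds to 2 raised to the power 120/18, which is a bit over 100x. The small Java snippet below simply carries out that arithmetic; it is an illustration written for this article, not something from the interview, and the class name is arbitrary.

    // A quick arithmetic check of the "100x per decade" figure:
    // doubling every 18 months over 120 months is 2^(120/18), roughly 101.6x.
    public class MooresLawCheck {
        public static void main(String[] args) {
            double monthsPerDoubling = 18.0;
            double monthsPerDecade = 120.0;
            double growthPerDecade = Math.pow(2.0, monthsPerDecade / monthsPerDoubling);
            System.out.printf("Growth over a decade: about %.1fx%n", growthPerDecade);
        }
    }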

It was normal in the early 80s, for instance, to have applications with about 100KB of memory running on servers that had half a megabyte of physical memory. If you had a problem of that size, then you'd fit that nicely into memory, and wouldn't bother to build a distributed system or some other complex solution.

In the early 90s, it was normal to solve 10MB problems on servers that had 32 or 64MB of memory—an HP or Sun server. You could get your boss to buy one of those servers for the department and solve the problem on a single server.

Move another decade forward, to the early 2000s, and a gigabyte heap on an entry-level server would be very normal. In other words, you could solve a gigabyte [size] problem on a single server instance by then.

If you look at those three decades, you can see a jump of about 100x each decade. That turns out to mirror Moore's Law in terms of memory capacity. In other words, applications kept up with the growing capacity of memory in terms of how much memory a single application consumed.

The software we're working on today, ten years later, assumes about the same amount of memory for each application instance as it did at the beginning of the decade, or only slightly more. Not many applications are written with the assumption that there will be 100GB or more of RAM available to the application. So there is a flattening of the curve there. There is now a decade of stagnation.

This has led to people trying to solve things in funny ways: for instance, building distributed systems just to handle problems a few tens of gigabytes in size. That has led to more and more complex systems. Luckily, in the Java world, tools have emerged that make it easier to solve problems that exceed what a single instance can manage. But is it really necessary to go that route, when Moore's Law would suggest we can solve 100GB-sized problems in a single memory address space?

Of course, clustering, lateral scaling, and so on are used not only to extend the memory space of a single application, but also to provide failover, redundancy, and so forth. Still, in many cases, they lead to complex systems. That's especially the case when, in 1-2 years, a commodity server will have over 200GB of physical RAM available. Yet the fact that individual application instances still use only a fraction of that memory shows that something's amiss in this picture.

Environments like Java or .NET are prevalent in part because they manage the environment in which an application runs. Those managed environments make development simpler, which in turn allows programmers to create complex applications that, in principle, use a lot of memory.

However, the way VMs have been implemented is a strong factor in holding us back from breaking past a handful of gigabytes. Specifically, there are two behaviors of virtual machines that play a role here. One is garbage collection of large amounts of memory. The other is being able to keep up with high rates of allocation. These, in fact, go hand in hand.

The metric for the first one is how many gigabytes of memory you can have without running into problems with the GC. The other metric is how many megabytes you can allocate in a given time period. Naturally, if you want to use lots of memory, you have to be able to allocate a lot of memory quickly and with high throughput. When either of those metrics fails to support the use of large amounts of memory, you are forced to start thinking about using only a fraction of the memory available.
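To make the allocation-rate metric concrete, here is a minimal, hypothetical sketch of how one might estimate how many megabytes per second a given JVM can allocate. It is not an Azul tool or anything mentioned in the interview; the class and variable names are made up, and a serious measurement would also need to account for GC pauses, JIT warm-up, and heap sizing.

    // Hypothetical sketch: estimate allocation throughput in MB/s.
    // Not a rigorous benchmark; GC pauses and JIT warm-up will skew results.
    public class AllocationRateSketch {
        public static void main(String[] args) {
            final int chunkBytes = 1024 * 1024;             // allocate 1MB at a time
            final long runNanos = 5L * 1000 * 1000 * 1000;  // run for roughly 5 seconds
            long allocatedBytes = 0;
            long sink = 0;                                  // keeps allocations from being optimized away
            long start = System.nanoTime();
            while (System.nanoTime() - start < runNanos) {
                byte[] chunk = new byte[chunkBytes];        // becomes garbage almost immediately
                sink += chunk[chunkBytes - 1];
                allocatedBytes += chunkBytes;
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("Allocated roughly %.0f MB/s (sink=%d)%n",
                    (allocatedBytes / (1024.0 * 1024.0)) / seconds, sink);
        }
    }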

So [the amount of memory a Java application can take advantage of] is really a function of the VM. And that's where we stand now. We know that most VMs tend to work well with one or two gigabytes. When you go above that, they don't immediately break, but you end up doing a lot of tuning, and at some point you just run out of tuning options.
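For context, the kind of tuning described here on a stock HotSpot JVM of that era usually meant experimenting with heap and collector options on the command line. The combination below is only illustrative (the application jar name is made up), not a recommendation from the interview:

    java -Xms8g -Xmx8g -Xmn2g \
         -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
         -XX:CMSInitiatingOccupancyFraction=70 \
         -verbose:gc -XX:+PrintGCDetails \
         -jar myApp.jar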

Given these issues, we [at Azul] have spent the last six years focusing on the scalability of the individual JVM. When we started, the problem was not obvious to a lot of people, because we were still early on the curve of large amounts of physical memory becoming available on servers. Since then, this has become a mainstream issue and, consequently, our solution has received a lot of interest.

Our JVMs are designed to allow Moore's Law to continue for individual JVMs. With our JVMs, you can easily use hundreds of gigabytes of RAM, and allocate that RAM at the rate of tens of gigabytes per second, within a single JVM.

Our VM presents itself to developers like any regular JVM would. We ship our JVM for Linux, Solaris, HP-UX, AIX, and Windows. You just install this JVM on any of those environments. When you execute our VMs, though, rather than the VM being launched on the stack on which it was invoked, it goes out on the network and finds the Azul device that's built to bypass the limitations I just talked about. That device is really what powers the JVM. That there is a separate device [to run the Java application] is transparent to the application, however: You don't have to make any changes to your application to take advantage of Azul's VM. While we're very proud of this device, it's really invisible. It's something that simply enhances your environment, and helps you overcome the memory limitations that have plagued Java applications for many years now.

What do you think of Azul's approach to scaling the amount of memory a Java application can take advantage of?

Post your opinion in the discussion forum.

About the author

Frank Sommers is an editor with Artima Developer. He is also founder and president of Autospaces, Inc., a company providing collaboration and workflow tools in the financial services industry.