This post originated from an RSS feed registered with Java Buzz
by Simon Brown.
Original Post: Millisecond accuracy in Java
Feed Title: Simon Brown's weblog
Feed URL: http://www.simongbrown.com/blog/feed.xml?flavor=rss20&category=java
Feed Description: My thoughts on Java, software development and technology.
I'm about to start a short consulting engagement where we need to performance test a low latency trading system. By low latency, I mean that messages need to flow through the system in under 50ms.
Performance testing work throws up lots of potential issues such as whether you can get access to accurate timestamps, whether system clocks are synchronised, etc. Another such issue is whether you can measure the time taken to make a request in an accurate way.
Let's say that you want to measure the time taken to make a synchronous request to a remote resource and measure how long that request takes. Additionally, let's say that you want to do this under load, simulating various numbers of concurrent users/sessions. One technical solution to this problem is to use something like JMeter to graph the response times across a varying load. Alternatively, you could write something bespoke. However you do it, you need to be sure that you can measure time as accurately as possible.
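A bespoke harness along these lines doesn't need to be complicated. Here's a minimal sketch, timing a synchronous call from several concurrent users with an ExecutorService; makeRequest() and the user count are placeholders standing in for the real request and load profile, not anything from a real system.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoadTimer {

    // Stand-in for the real synchronous request; the 10ms sleep is
    // purely illustrative.
    static void makeRequest() {
        try { Thread.sleep(10); } catch (InterruptedException ignored) { }
    }

    // Time a single request in milliseconds.
    static long timeRequest() {
        long start = System.currentTimeMillis();
        makeRequest();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws Exception {
        int users = 4; // simulated concurrent users/sessions
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> times = new ArrayList<Future<Long>>();
        for (int i = 0; i < users; i++) {
            times.add(pool.submit(new Callable<Long>() {
                public Long call() { return timeRequest(); }
            }));
        }
        for (Future<Long> f : times) {
            System.out.println("response time: " + f.get() + "ms");
        }
        pool.shutdown();
    }
}
```

Of course, the numbers this produces are only as good as the clock behind System.currentTimeMillis(), which is exactly the problem.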
If you're writing a test harness in Java, you can use System.currentTimeMillis() or System.nanoTime() (Java 5) to get a measurement of the current time. However, if you're going to do this, it's worth reading the Javadocs for each of these methods, because neither guarantees millisecond accuracy. From System.currentTimeMillis():
Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
So how accurate is it? On Windows, System.currentTimeMillis() doesn't give you the current time to an exact 1ms resolution because of the way that the Windows system clock works. To demonstrate this, I wrote a simple Java program (download as an executable JAR file) that collects the current time for a short period and then displays a consolidated view of the results. The output below shows the raw time in milliseconds, the human formatted version, the number of times System.currentTimeMillis() returned that same time, and the delta from the previous time.
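If you'd rather not download the JAR, the core of a program like this is small enough to sketch here. This is my reconstruction of the idea rather than the actual source: sample the clock in a tight loop, count how many times each value comes back, and print the delta between consecutive values.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ClockResolution {

    // Sample System.currentTimeMillis() repeatedly and count how many
    // times each distinct value is returned.
    static Map<Long, Integer> sample(int iterations) {
        Map<Long, Integer> counts = new LinkedHashMap<Long, Integer>();
        for (int i = 0; i < iterations; i++) {
            long now = System.currentTimeMillis();
            Integer c = counts.get(now);
            counts.put(now, c == null ? 1 : c + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        long previous = -1;
        for (Map.Entry<Long, Integer> e : sample(1000000).entrySet()) {
            long delta = previous < 0 ? 0 : e.getKey() - previous;
            System.out.println(e.getKey() + " count=" + e.getValue()
                    + " delta=" + delta + "ms");
            previous = e.getKey();
        }
    }
}
```

On a platform with a coarse clock, the deltas cluster around the clock's actual granularity rather than 1ms.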
As you can see, Windows tends to provide a clock resolution of about 15ms. I ran this on a couple of reasonably spec'd Windows XP and Windows Server 2003 boxes, and using the Sun and BEA JVMs. However, running the same program on Redhat (2.6.9 kernel) and Mac OS X (10.4.x, PPC G4 and Intel Core Duo) gave the following results.
These simple tests show that some platforms do provide millisecond accuracy. System.nanoTime() is an alternative, but it has its own *additional* problems: I've found the actual call to be slower, and the time returned is relative to a fixed but arbitrary origin rather than to wall-clock time. My initial reaction was that Java is no good for performance testing, but I take that back and make the following recommendations instead.
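That arbitrary origin means System.nanoTime() is only useful for measuring elapsed time by subtracting two readings, never for producing a timestamp you can pass around the system. A minimal sketch of using it that way (the helper and the sleeping task are illustrative, not part of any API):

```java
public class NanoTimer {

    // nanoTime() values are relative to a fixed but arbitrary origin,
    // so only the *difference* between two readings is meaningful.
    static long elapsedMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1000000L;
    }

    public static void main(String[] args) {
        long elapsed = elapsedMillis(new Runnable() {
            public void run() {
                try { Thread.sleep(20); } catch (InterruptedException e) { }
            }
        });
        System.out.println("elapsed ~" + elapsed + "ms");
    }
}
```

Used like this, it sidesteps the coarse granularity of the Windows system clock for elapsed-time measurements, at the cost of a more expensive call.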
If you need to measure latencies in a "low latency" system, you need to do this on a platform that has an accurate clock resolution. Check that your platform provides millisecond accuracy before you start testing.
Don't try to measure accurate response times/latencies on the Windows platform, unless you've tweaked your OS.
Don't use Windows to generate load if part of your test data/request includes a millisecond accurate timestamp that you want to pass around the system.