The Artima Developer Community

Java Buzz Forum
Millisecond accuracy in Java

Simon Brown


Simon Brown is a Java developer, architect and author.
Millisecond accuracy in Java Posted: Aug 21, 2007 4:13 AM

This post originated from an RSS feed registered with Java Buzz by Simon Brown.
Original Post: Millisecond accuracy in Java
Feed Title: Simon Brown's weblog
Feed URL: http://www.simongbrown.com/blog/feed.xml?flavor=rss20&category=java
Feed Description: My thoughts on Java, software development and technology.


I'm about to start a short consulting engagement where we need to performance test a low latency trading system. By low latency, I mean that messages need to flow through the system in under 50ms.

Performance testing work throws up lots of potential issues such as whether you can get access to accurate timestamps, whether system clocks are synchronised, etc. Another such issue is whether you can measure the time taken to make a request in an accurate way.

Let's say that you want to measure how long a synchronous request to a remote resource takes. Additionally, let's say that you want to do this under load, simulating various numbers of concurrent users/sessions. One technical solution to this problem is to use something like JMeter to graph the response times across a varying load. Alternatively, you could write something bespoke. However you do it, you need to be sure that you can measure time as accurately as possible.
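If you go the bespoke route, a minimal load driver could be sketched as below: a fixed pool of "users", each timing its own request. The doRequest() method here is a placeholder stand-in, not any real system under test.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hedged sketch of a bespoke load driver: N concurrent "users" each
// timing a synchronous request. doRequest() is a placeholder.
public class LoadDriver {
    static void doRequest() {
        // Busy-wait ~20ms as a stand-in for a real synchronous call.
        long end = System.nanoTime() + 20000000L;
        while (System.nanoTime() < end) { }
    }

    public static void main(String[] args) throws Exception {
        int users = 5; // number of simulated concurrent users
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            final int user = i;
            pool.submit(new Callable<Long>() {
                public Long call() {
                    long start = System.nanoTime();
                    doRequest();
                    long elapsedMs = (System.nanoTime() - start) / 1000000L;
                    System.out.println("user " + user + ": " + elapsedMs + "ms");
                    return elapsedMs;
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

A real harness would also record the per-request timings somewhere for later graphing, which is the part JMeter gives you for free.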

If you're writing a test harness in Java, you can use System.currentTimeMillis() or, from Java 5, System.nanoTime() to take timings. However, if you're going to do this, it's worth reading the Javadocs for each of these methods, because they don't guarantee millisecond accuracy. From System.currentTimeMillis():

Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
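In practice, timing a single request with either method looks like this. This is a minimal sketch; doRequest() is a placeholder, and the point is the contrast between the two clocks rather than the call being timed.

```java
// Timing one request two ways. doRequest() is a stand-in placeholder.
public class ElapsedTime {
    static void doRequest() {
        // Busy-wait ~20ms as a stand-in for a remote call.
        long end = System.nanoTime() + 20000000L;
        while (System.nanoTime() < end) { }
    }

    public static void main(String[] args) {
        // Wall-clock timing: granularity depends on the OS (see above).
        long startMs = System.currentTimeMillis();
        doRequest();
        long elapsedMs = System.currentTimeMillis() - startMs;

        // nanoTime (Java 5+): finer-grained, but only deltas are meaningful.
        long startNs = System.nanoTime();
        doRequest();
        long elapsedNs = System.nanoTime() - startNs;

        System.out.println("currentTimeMillis: " + elapsedMs + "ms");
        System.out.println("nanoTime:          " + (elapsedNs / 1000000L) + "ms");
    }
}
```

On a platform with a coarse clock, the first figure will be quantised to the clock's granularity while the second tracks the real elapsed time much more closely.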

So how accurate is it? On Windows, System.currentTimeMillis() doesn't give you the current time to an exact 1ms resolution because of the way that the Windows system clock works. To demonstrate this, I wrote a simple Java program (download as an executable JAR file) that collects the current time for a short period and then displays a consolidated view of the results. The output below shows the raw time in milliseconds, the human-formatted version, the number of times System.currentTimeMillis() returned that same time, and the delta from the previous time.
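The original JAR isn't reproduced here, but a rough reconstruction of such a sampler (my sketch, assumed rather than the author's actual code) could look like this:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

// Assumed reconstruction of the sampler described above: spin for a short
// period, counting how often each raw millisecond value comes back, then
// print the first few results with the delta between successive values.
public class ClockResolution {
    public static void main(String[] args) {
        Map<Long, Integer> samples = new LinkedHashMap<Long, Integer>();
        long stop = System.currentTimeMillis() + 100; // sample for ~100ms
        while (System.currentTimeMillis() < stop) {
            long now = System.currentTimeMillis();
            Integer count = samples.get(now);
            samples.put(now, count == null ? 1 : count.intValue() + 1);
        }

        SimpleDateFormat format = new SimpleDateFormat("dd-MMM-yyyy HH:mm:ss:SSS");
        long previous = -1;
        int printed = 0;
        for (Map.Entry<Long, Integer> entry : samples.entrySet()) {
            if (printed++ == 5) break; // first few lines are enough
            long raw = entry.getKey().longValue();
            System.out.println("raw=" + raw
                    + " | formatted=" + format.format(new Date(raw))
                    + " | frequency=" + entry.getValue()
                    + " | delta=" + (previous < 0 ? "n/a" : (raw - previous) + "ms"));
            previous = raw;
        }
    }
}
```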

raw=1187645263093 | formatted=20-Aug-2007 22:27:43:093 | frequency=107567 | delta=15ms
raw=1187645263109 | formatted=20-Aug-2007 22:27:43:109 | frequency=107808 | delta=16ms
raw=1187645263125 | formatted=20-Aug-2007 22:27:43:125 | frequency=103450 | delta=16ms
raw=1187645263140 | formatted=20-Aug-2007 22:27:43:140 | frequency=231928 | delta=15ms
raw=1187645263156 | formatted=20-Aug-2007 22:27:43:156 | frequency=229545 | delta=16ms

As you can see, Windows tends to provide a clock resolution of about 15ms. I ran this on a couple of reasonably spec'd Windows XP and Windows Server 2003 boxes, and using the Sun and BEA JVMs. However, running the same program on Redhat (2.6.9 kernel) and Mac OS X (10.4.x, PPC G4 and Intel Core Duo) gave the following results.

raw=1187645137371 | formatted=20-Aug-2007 22:25:37:371 | frequency=3302 | delta=1ms
raw=1187645137372 | formatted=20-Aug-2007 22:25:37:372 | frequency=3282 | delta=1ms
raw=1187645137373 | formatted=20-Aug-2007 22:25:37:373 | frequency=3295 | delta=1ms
raw=1187645137374 | formatted=20-Aug-2007 22:25:37:374 | frequency=2272 | delta=1ms
raw=1187645137375 | formatted=20-Aug-2007 22:25:37:375 | frequency=3293 | delta=1ms

These simple tests show that some platforms do provide millisecond accuracy. System.nanoTime() is an alternative, but it has its own *additional* problems - I've found the actual call to be slower, and the time returned is relative to a fixed but arbitrary origin, so it can only be used to measure elapsed time. My initial reaction was that Java is no good for performance testing, but I take that back and make the following recommendations instead.

  • If you need to measure latencies in a "low latency" system, you need to do this on a platform that has an accurate clock resolution. Check that your platform provides millisecond accuracy before you start testing.
  • Don't try to measure accurate response times/latencies on the Windows platform, unless you've tweaked your OS.
  • Don't use Windows to generate load if part of your test data/request includes a millisecond accurate timestamp that you want to pass around the system.
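The nanoTime() caveat above is worth seeing concretely: its value is relative to an arbitrary origin, so only differences between two calls mean anything. A small illustration:

```java
// Illustrating the nanoTime() caveat: the raw value has an arbitrary
// origin, so only deltas between two calls are meaningful.
public class NanoTimeDelta {
    public static void main(String[] args) throws InterruptedException {
        long nano = System.nanoTime();
        long millis = System.currentTimeMillis();
        // nano is NOT nanoseconds since the epoch - don't format it as a date.
        System.out.println("nanoTime=" + nano + " (arbitrary origin)");
        System.out.println("currentTimeMillis=" + millis + " (epoch-based)");

        long start = System.nanoTime();
        Thread.sleep(50);
        long elapsedMs = (System.nanoTime() - start) / 1000000L;
        System.out.println("elapsed ~" + elapsedMs + "ms"); // valid: a delta
    }
}
```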

Basically, don't use Windows. ;-)



Copyright © 1996-2019 Artima, Inc. All Rights Reserved. - Privacy Policy - Terms of Use