The Artima Developer Community

Java Community News
How Early Should You Test for Performance?

24 replies on 2 pages. Most recent reply: Dec 8, 2006 6:53 AM by Markus Kohler

Frank Sommers

Posts: 2642
Nickname: fsommers
Registered: Jan, 2002

How Early Should You Test for Performance? Posted: Nov 30, 2006 1:37 PM
Summary
In an IBM DeveloperWorks article, Andrew Glover discusses how developers can test for performance early in the development cycle with the JUnitPerf tool. Realistic performance measurements from code in the early stages of a project may not be easy to obtain, however. Just how early in a project should you start performance and load testing?

The idea behind developer testing is that the earlier you discover problems in a project, and the quicker you fix them, the less likely undiscovered bugs are to slow down progress.

In a recent installment of his IBM DeveloperWorks column on code quality, Performance testing with JUnitPerf, Andrew Glover argues that you should not only unit-test code constantly from a project's inception, but also frequently gauge the performance of new code:

Verifying application performance is almost always a secondary concern during application development... An application's performance is always a chief concern, but verification is rarely part of the development cycle. Performance testing is usually put off until later cycles for a variety of reasons. In my experience, businesses don't include performance testing in the development process because they don't know what to expect from the application in progress.

Glover then writes that several tools now make performance testing almost as easy as unit testing, and introduces one such tool, JUnitPerf:

You can create two types of tests using JUnitPerf: TimedTests and LoadTests. Both are based on the Decorator design pattern and utilize JUnit's suite mechanism.

TimedTests create a top-level bound for a test case—if this time is surpassed, then the test fails. LoadTests work in cooperation with timers and create an artificial load on a particular test case by running it a desired number of times separated by the configured timer.

With a TimedTest, you can tell JUnitPerf to either fail fast—fail if the specified time bound is exceeded, even if the test would otherwise succeed—or to run a test to completion. The latter case will then produce the timing of the actual test run.
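
To make that concrete, here is a minimal sketch of a timed test, assuming JUnitPerf's com.clarkware.junitperf.TimedTest class; the OrderDaoTest class and its testCreate() method are hypothetical stand-ins for an existing JUnit 3 test case:

import com.clarkware.junitperf.TimedTest;
import junit.framework.Test;
import junit.framework.TestSuite;

public class CreateOrderTimedTest {

    public static Test suite() {
        long maxElapsedTimeMillis = 1000;

        // Decorate a single test method of a hypothetical JUnit 3 test case.
        Test testCase = new OrderDaoTest("testCreate");

        // Fail fast: the test fails as soon as the time bound is exceeded,
        // even if it would otherwise pass.
        Test failFast = new TimedTest(testCase, maxElapsedTimeMillis, false);

        // Wait for completion (the default): the test runs to the end, the actual
        // elapsed time is reported, and it fails afterwards if the bound was exceeded.
        Test runToCompletion = new TimedTest(testCase, maxElapsedTimeMillis);

        TestSuite suite = new TestSuite("testCreate() response time");
        suite.addTest(failFast);
        suite.addTest(runToCompletion);
        return suite;
    }
}

Because the result is just another JUnit Test, either decorated variant can be run by an ordinary JUnit test runner.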

Load testing with JUnitPerf executes a test a specified number of times, in different threads. An interesting type of test can combine load testing with performance testing, using a decorator pattern:

Decoration can happen on multiple levels, and so it is with JUnitPerf's TimedTests and LoadTests. When these two classes decorate each other, it leads to some compelling test scenarios, such as one where a load is placed on a business case and a time threshold is also applied. Or we could just combine the previous two test scenarios as follows:

  • Place a load on the testCreate() method.
  • Specify that every thread must finish within the time threshold.
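
Here is a minimal sketch of that combination, using the same hypothetical OrderDaoTest. Because both classes are decorators, the nesting order matters: wrapping the TimedTest inside the LoadTest applies the time threshold to each simulated user, whereas wrapping a LoadTest inside a TimedTest would bound the duration of the whole load run:

import com.clarkware.junitperf.LoadTest;
import com.clarkware.junitperf.TimedTest;
import junit.framework.Test;
import junit.framework.TestSuite;

public class CreateOrderLoadTest {

    public static Test suite() {
        int concurrentUsers = 10;
        long maxElapsedTimeMillis = 1000;

        // The hypothetical unit test to place under load.
        Test testCase = new OrderDaoTest("testCreate");

        // Each simulated user must finish testCreate() within the time threshold...
        Test timedTest = new TimedTest(testCase, maxElapsedTimeMillis);

        // ...and ten users run it concurrently, each in its own thread.
        Test loadTest = new LoadTest(timedTest, concurrentUsers);

        TestSuite suite = new TestSuite("testCreate() under load");
        suite.addTest(loadTest);
        return suite;
    }
}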

One problem with the decorator pattern in the context of JUnitPerf, however, is that JUnitPerf itself is a decorator on JUnit tests, according to Glover. The benefit is that all JUnitPerf tests can be executed with any JUnit test runner, including those integrated with IDEs, but it also means that performance numbers will include the JUnit test cases' overhead, such as the setUp() and tearDown() methods.
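
A short, hypothetical sketch of why that matters: if the decorated test case does expensive fixture work, that work is counted against the time bound as well. The JDBC URL below is made up:

import java.sql.Connection;
import java.sql.DriverManager;
import junit.framework.TestCase;

// Hypothetical test case: TimedTest measures the whole run, which includes
// setUp() and tearDown(), so the connection setup below counts toward the bound.
public class OrderDaoTest extends TestCase {

    private Connection connection;

    public OrderDaoTest(String name) {
        super(name);
    }

    protected void setUp() throws Exception {
        // Potentially slow fixture work, all of it included in the timing.
        connection = DriverManager.getConnection("jdbc:hsqldb:mem:orders", "sa", "");
    }

    protected void tearDown() throws Exception {
        connection.close();
    }

    public void testCreate() throws Exception {
        // ... exercise the DAO under test ...
    }
}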

In your projects, how early do you start performance testing? And how closely do you think performance numbers obtained during development mirror those of the deployed application?


Leo Lipelis

Posts: 111
Nickname: aeoo
Registered: Apr, 2006

Re: How Early Should You Test for Performance? Posted: Nov 30, 2006 5:29 PM
LOL.

From a Wikipedia article:

* "More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity." - W.A. Wulf

* "Premature optimization is the root of all evil." - Hoare and Knuth

* "Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you have proven that's where the bottleneck is." - Rob Pike

* "The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet." - Michael A. Jackson

Todd Blanchard

Posts: 316
Nickname: tblanchard
Registered: May, 2003

Re: How Early Should You Test for Performance? Posted: Nov 30, 2006 7:03 PM
I have to agree.

Make it work.
Make it work right.
If it's not fast enough, then profile it and make it faster.

Anjan Bacchu

Posts: 18
Nickname: anjanb
Registered: Mar, 2002

Re: How Early Should You Test for Performance? Posted: Nov 30, 2006 9:49 PM
> I have to agree.
>
> Make it work.
> Make it work right.
> If its not fast enough, then profile it and make it faster.

hi there,

It's easy to say you'll profile after getting it working. But when computers can work 24 hours a day, it doesn't make much sense not to use them to keep running performance tests of IMPORTANT USE CASES on a regular basis.

I'm NOT suggesting investing a lot of time early on. But if you would like to have peace of mind that you will hit the target on time and with the expected performance, then it is imperative that you know what performance your architecture will give you. You can do that either with a "CASE STUDY" or "PROOF OF CONCEPT", or with regular performance testing of important use cases.

Unless your application is trivial or you have a lot of hardware, the performance of the system will ALMOST ALWAYS fall short of what's needed.

My 2 cents

BR,
~A

Frank Sommers

Posts: 2642
Nickname: fsommers
Registered: Jan, 2002

Re: How Early Should You Test for Performance? Posted: Nov 30, 2006 10:27 PM
One question that comes to mind is how early performance testing can help define a system's architecture.

Most people have some performance requirements in mind when designing an application - for instance, the peak number of customers expected within a year, or the number of transactions a business expects in an hour. Few systems come with a requirement to scale infinitely - and even those systems can be developed in stages, each stage having a specific performance target.

A lot of architectural decisions are based on the perceived scalability of an architecture. Actual data in the context of an application can only help make better architectural decisions, I think.

It might help to spike a solution early, possibly with the simplest architecture for the job, and gather load and performance data. That data may even indicate that a seemingly more scalable architecture would amount to over-engineering the system, or that a type of architecture that was previously infeasible is now able to meet the performance goals.

For instance, I've been recently working on a project using Rails. I chose Rails because I knew it would allow me to progress quickly. However, I was initially very skeptical of Ruby/Rails performance, and was concerned at first that Rails might not let me meet the performance goals for the project. I spiked up a quick use-case, and wrote a simple script to performance-test that use-case.

It turned out that the system performed very poorly at first. However, a bit more digging in the documentation showed that there were better ways of deploying this Rails application, and one of those ways provided far better performance. So that early feedback helped me become more confident about using Rails, i.e., it helped me make the architecture decision.

David Medlock

Posts: 11
Nickname: dmedlock
Registered: Jun, 2006

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 7:35 AM
I tend to agree with Peter Norvig's analysis of optimization.

There are four basic strategies:

1. Indexing
2. Caching/memoization
3. Compiling/Hardware
4. Delaying computation (lazy evaluation or branch removal)

I don't see compelling reasons to put any of those early in the development process unless absolutely necessary.
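
For concreteness, here is a minimal sketch of the second strategy (caching/memoization) in Java; the slow computation below is a made-up stand-in for whatever is actually expensive in a real system:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal memoization sketch: cache the results of a pure but expensive function,
// so repeated calls with the same argument hit the map instead of recomputing.
public class Memoizer<K, V> {

    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> computation;

    public Memoizer(Function<K, V> computation) {
        this.computation = computation;
    }

    public V get(K key) {
        return cache.computeIfAbsent(key, computation);
    }

    public static void main(String[] args) {
        // Hypothetical expensive call, stubbed here as an artificially slow function.
        Memoizer<Integer, Long> squares = new Memoizer<>(n -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            return (long) n * n;
        });

        System.out.println(squares.get(12)); // slow: computed
        System.out.println(squares.get(12)); // fast: served from the cache
    }
}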

Merriodoc Brandybuck

Posts: 225
Nickname: brandybuck
Registered: Mar, 2003

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 7:50 AM
> I have to agree.
>
> Make it work.
> Make it work right.
> If its not fast enough, then profile it and make it faster.


I have yet to see a complex system where performance can just be added on later as if it were just another feature. Systems that need to perform well should be profiled early and often. I've been a part of enough "make it perform as well as you can in two weeks" projects to feel pretty strongly about that one.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 9:42 AM
David Medlock wrote: "I tend to agree with Peter Norvig's analysis of optimization. ..."
And several other comments here also seem to be on the familiar talking point of premature optimization, rather than what I took to be the subject - feasibility.

David Medlock

Posts: 11
Nickname: dmedlock
Registered: Jun, 2006

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 10:46 AM
> David Medlock wrote I tend to agree with Peter Norvigs
> analysis of optimization. ...

> And several other comments here also seem to be on the
> familiar talking point of premature optimization rather
> than what I took to be the subject - feasibility.

I think the point still applies.

When you ask 'Is X fast enough?' or 'Is X faster than Y?', the answer always depends on what you are doing (i.e., it's app-specific).

When designing a ray tracer versus a 3D shooter, the optimizations differ, but how are you supposed to know how much time *each component* takes to do something before you know the bottlenecks? This is especially true of components whose I/O requirements vary from use to use.

Unit testing is nice because you can specifically ensure that corner cases are taken care of. Timings of components aren't as clear-cut.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 11:38 AM
Perhaps I misunderstood what the point was - isn't "I don't see compelling reasons to put any of those early in the development process unless absolutely necessary" just begging the question?

David Medlock

Posts: 11
Nickname: dmedlock
Registered: Jun, 2006

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 12:06 PM
> Perhaps I misunderstood what the point was - isn't I
> don't see compelling reasons to put any of those early in
> the development process unless absolutely necessary

> just begging the question?

How is what I posted circular logic or assumed conclusions?
http://www.nizkor.org/features/fallacies/begging-the-question.html

The post says: Here is a piecemeal tool for automated optimization testing.

I say: *I* don't see compelling reasons to integrate it into my (early) development process.

I simply gave my opinion on the matter, is that not what this forum is for?

Cameron Purdy

Posts: 186
Nickname: cpurdy
Registered: Dec, 2004

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 12:40 PM
> I tend to agree with Peter Norvigs analysis of
> optimization.
> There are four basic strategies:
> 1. Indexing
> 2. Cacheing/memoization
> 3. Compiling/Hardware
> 4. Delaying computation (lazy evaluation or branch
> removal)
> I don't see compelling reasons to put any of those early
> in the development process unless absolutely necessary.

Those are not the strategies that I think of when I think of building optimal software. Here are my only two:

1) An architecture that well-fits the task at hand, and
2) Data structures that match the task at hand and provide deterministic performance for the use cases for which they will be used.

These two, more than anything, will allow for the "global maximum" to be solved for. If you don't get these right, no amount of "optimization" will get you there.
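
As a rough illustration of the second point (a sketch, not anyone's production code): for a lookup-heavy use case, a hash map gives roughly constant-time lookups, while scanning a list degrades as the data grows:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The same "find a name by id" use case backed by two data structures.
// A linear scan is O(n) per lookup; the hash map stays roughly constant as data grows.
public class LookupComparison {

    public static void main(String[] args) {
        int size = 200_000;
        List<Integer> ids = new ArrayList<>();
        Map<Integer, String> nameById = new HashMap<>();
        for (int i = 0; i < size; i++) {
            ids.add(i);
            nameById.put(i, "customer-" + i);
        }

        int wanted = size - 1;

        long t0 = System.nanoTime();
        int index = ids.indexOf(wanted);      // O(n) scan
        long t1 = System.nanoTime();
        String name = nameById.get(wanted);   // O(1) expected
        long t2 = System.nanoTime();

        System.out.printf("scan found index %d in %,d ns; map found %s in %,d ns%n",
                index, t1 - t0, name, t2 - t1);
    }
}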

Peace,

Cameron Purdy
http://www.tangosol.com/

Merriodoc Brandybuck

Posts: 225
Nickname: brandybuck
Registered: Mar, 2003

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 12:59 PM
> > I tend to agree with Peter Norvigs analysis of
> > optimization.
> > There are four basic strategies:
> > 1. Indexing
> > 2. Cacheing/memoization
> > 3. Compiling/Hardware
> > 4. Delaying computation (lazy evaluation or branch
> > removal)
> > I don't see compelling reasons to put any of those
> early
> > in the development process unless absolutely necessary.
>
> Those are not the strategies that I think of when I think
> of building optimal software. Here are my only two:
>
> 1) An architecture that well-fits the task at hand, and
> 2) Data structures that match the task at hand and provide
> deterministic performance for the use cases for which they
> will be used.
>
> These two, more than anything, will allow for the "global
> maximum" to be solved for. If you don't get these right,
> no amount of "optimization" will get you there.
>
> Peace,
>
> Cameron Purdy
> http://www.tangosol.com/

I hope I'm not the only one here that has had to change an architecture or data structures to solve a problem. It is quite possible that the best theoretical architecture (the one that intuitively models the problem domain) is a poor fit for actually solving the problem, because we do not have machines with infinite memory that can perform any calculation instantaneously. I don't think this is as common an occurrence as it once was, given the leaps in processing power and memory available to software writers over the past decade or so, and I certainly don't advocate throwing a bunch of horrid hacks together to make something as fast or memory-efficient as possible without regard to maintainability and extensibility. But there are still cases where getting the problem solved means solving it in a manner that may not be the most "elegant".

I think one only needs to look at the origins of Duff's Device to see a real-world example of this. Granted, it's over two decades old, but the main thrust of the reason to invent it still holds.

"Somebody invoked (or more properly, banished) the `false god of efficiency.' Careful reading of my original note will put this slur to rest. The alternative to genuflecting before the god of code-bumming is finding a better algorithm. It should be clear that none such was available. If your code is too slow, you must make it faster. If no better algorithm is available, you must trim cycles."

from http://www.lysator.liu.se/c/duffs-device.html

Leo Lipelis

Posts: 111
Nickname: aeoo
Registered: Apr, 2006

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 4:05 PM
Ok, in my own words now. I think generally you should not optimize at all or only optimize after you know pretty much everything about the system you are developing and the concrete use cases that you will optimize for.

However, in the real world, what can happen is just horribly bad design that can't even be optimized.

What "performance testing early" strategy can do is catch the horrible design early and it can give you a peace of mind if you use it judiciously and avoid breaking the best practices of optimization.

I'd say it's not needed for most apps, but then, what do I know?

The exception to this would be a system designed specifically for performance, with performance as its main design objective -- for example, a real-time system. If you have a real-time system, it should probably be blindingly fast throughout the entire development life cycle.

If you avoid something really wacky, like loading two tables into memory and then doing a join between them in your application (something you should let the database do for you) and other such "fruitcake" things, then you shouldn't need to worry about early optimization. But then again, how do you make sure all the people on your team understand what's going on? There is no way. So, if you don't trust your team much, then it might be a good idea to test for performance early, while still keeping in mind why those luminaries said what they said about optimization.
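
As a concrete illustration of the join example, a small hypothetical JDBC sketch: push the join down to the database and fetch only the combined rows, instead of loading both tables and matching them in Java. The connection URL, table, and column names are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Let the database perform the join instead of loading both tables into memory
// and matching rows in application code. Schema and URL are hypothetical.
public class OrdersReport {

    public static void main(String[] args) throws Exception {
        String sql =
            "SELECT c.name, o.total " +
            "FROM orders o JOIN customers c ON o.customer_id = c.id " +
            "WHERE o.placed_on >= ?";

        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:shop", "sa", "");
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setDate(1, java.sql.Date.valueOf("2006-01-01"));
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + " -> " + rs.getBigDecimal("total"));
                }
            }
        }
    }
}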

Merriodoc Brandybuck

Posts: 225
Nickname: brandybuck
Registered: Mar, 2003

Re: How Early Should You Test for Performance? Posted: Dec 1, 2006 6:43 PM
> Ok, in my own words now. I think generally you should not
> optimize at all or only optimize after you know pretty
> much everything about the system you are developing and
> the concrete use cases that you will optimize for.
>

I don't think anybody disagrees with that.

> However, in the real world, what can happen is just
> horribly bad design that can't even be optimized.
>

Seen it, done it, hope to never do it again.

> What "performance testing early" strategy can do is catch
> the horrible design early and it can give you a peace of
> mind if you use it judiciously and avoid breaking the best
> practices of optimization.
>
> I'd say it's not needed for most apps, but then, what do I
> know?
>

In this day and age it is very easy to test for performance in almost any runtime with a good profiler.

> The exception from this would be a system designed
> specifically for performance and with performance as its
> main design objective -- for example -- a real time
> system. If you have a real time system, it should
> probably be blindingly fast throughout the entire
> development life cycle.
>

The system I'm specifically thinking of was responsible for rolling up financial data. It certainly isn't a real-time app, and performance wasn't its main criterion, but this thing is now over a decade old and has had all kinds of wacky operations bolted onto it. In the early days nobody bothered testing performance, and now it runs very slowly in certain cases. There are a couple of really talented people who work on it and make it perform as best they can, but to get it to run some processes at the speeds some customers desire would require almost a year's worth of work, according to some of the developers' estimates. Management doesn't want to pay this cost, hence the directives to "do what you can in two weeks," as if rolling up a couple-hundred-gigabyte database were the same as adding a blue button to a Windows form.

So while performance isn't the main design objective, it is an important consideration. It is one that could have been addressed as features were being added, but performance was never a concern along the way. It is only an issue now that customers are complaining, but since the dev team wasn't allowed to give performance its due consideration during development, and since management doesn't want to pay to get performance to where it could be with a concentrated effort, almost every release in the past few years has had these half-hearted attempts to squeeze a little bit of performance out.

> If you avoid something really wacky, like loading two
> tables into memory and then doing a join between them in
> your application (something you should let database do for
> you) and other such "fruitcake" things, then you shouldn't
> need to worry about early optimization. But then again,
> how do you make sure all the people on your team
> understand what's going on? There is no way. So, if you
> don't trust your team much, then it might be a good idea
> to test for performance early while still keeping in mind
> why those luminaries said what they said about
> optimization.


If anything testing for performance will help prevent premature optimization. Measuring what you currently have is a whole different ball of wax than trying to make something better. Without any data, how can you know whether you do or don't need to spend any time on optimization? Just about everybody agrees that finding defects earlier rather than later is good because fixing defects earlier is much cheaper. Why not take the same view with an application's performance?

Unfortunately I have worked with some developers who seem to think "premature optimization is evil" means "I don't need to give a rat's ass about how my application actually works." I don't think those luminaries thought you should take that piece of wisdom and use it as a license to be an idiot, yet that seems to be how some people interpret it.

In my mind optimizing a system involves tweaking an already adequate system to squeeze something more out of it if you need to. Measuring performance along the way and correcting gross inefficiencies isn't premature optimization. It's making the application useable in the first place.
