The Artima Developer Community

Java Community News
How Early Should You Test for Performance?

24 replies on 2 pages. Most recent reply: Dec 8, 2006 6:53 AM by Markus Kohler

Mark Thornton

Posts: 275
Nickname: mthornton
Registered: Oct, 2005

Re: How Early Should You Test for Performance? Posted: Dec 2, 2006 1:29 AM
In my work it is best to start tests during the initial design and to repeat them often throughout development. My field is vehicle routing - algorithms to find the best routes and loading of vehicles. Not a very common field, so my experience is probably atypical.
One downside to improving performance is that when you succeed in increasing the practical problem size from, say, 1500 to 2500, the salesman will come back with a prospect whose problem size is 3500.

One mistake I made on a different type of project (specialised database) was not testing the performance of NIO and memory mapped files soon enough.
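
For what it's worth, the kind of early measurement I mean is nothing fancy - a rough sketch along these lines (the file path and the byte-summing loop are purely illustrative), run against realistically sized data before committing to the design:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Rough sketch: time one sequential pass over a memory-mapped file.
// "data/records.bin" is a placeholder; use a realistically sized file.
public class MappedReadTiming {
    public static void main(String[] args) throws Exception {
        RandomAccessFile file = new RandomAccessFile("data/records.bin", "r");
        FileChannel channel = file.getChannel();
        MappedByteBuffer buffer =
            channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());

        long start = System.nanoTime();
        long checksum = 0;
        while (buffer.hasRemaining()) {
            checksum += buffer.get();   // touch every byte once
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000L;

        System.out.println("mapped read: " + elapsedMs
            + " ms (checksum " + checksum + ")");
        channel.close();
        file.close();
    }
}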

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: How Early Should You Test for Performance? Posted: Dec 2, 2006 10:22 AM
> > Perhaps I misunderstood what the point was - isn't "I
> > don't see compelling reasons to put any of those early in
> > the development process unless absolutely necessary"
> > just begging the question?


David Medlock wrote
> How is what I posted circular logic or assumed
> conclusions?
> http://www.nizkor.org/features/fallacies/begging-the-question.html
>
> The post says: Here is a piece-meal tool for automated
> optimization testing.
>
> I say: *I* don't see compelling reasons to integrate it
> into my (early) development process.

You've removed the "unless absolutely necessary" - I think the necessity (or not) is the question we're trying to answer.

Peter Hickman

Posts: 41
Nickname: peterhi
Registered: Mar, 2003

Re: How Early Should You Test for Performance? Posted: Dec 2, 2006 3:07 PM
What we might be missing here is the chance to pick up some metrics on our code. If we were to run a full set of tests every time someone checked in, or perhaps just once a night, we could take measurements and compare them with those from previous runs.

This would allow us to flag up performance hits before changing the code base becomes too expensive, and we would have a pretty good idea of which changes affected performance.
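
A minimal sketch of the kind of nightly comparison I have in mind (the baseline file name, the scenario, and the 10% threshold are all made up for illustration):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

// Rough sketch: time a "live-like" scenario, compare against the figure
// recorded by the previous run, and warn when it regresses by more than 10%.
public class NightlyTimings {
    public static void main(String[] args) throws IOException {
        File baselineFile = new File("perf-baseline.properties");
        Properties previous = new Properties();
        if (baselineFile.exists()) {
            previous.load(new FileInputStream(baselineFile));
        }

        long start = System.nanoTime();
        runScenario();   // placeholder for whatever exercises the code as it is used live
        long millis = (System.nanoTime() - start) / 1000000L;

        String key = "scenario.millis";
        if (previous.containsKey(key)) {
            long old = Long.parseLong(previous.getProperty(key));
            if (millis > old * 1.10) {
                System.err.println("WARNING: " + key + " regressed from "
                    + old + " ms to " + millis + " ms");
            }
        }
        previous.setProperty(key, Long.toString(millis));
        previous.store(new FileOutputStream(baselineFile), "nightly performance figures");
    }

    private static void runScenario() {
        double sink = 0;
        for (int i = 0; i < 1000000; i++) {
            sink += Math.sqrt(i);   // stand-in for the real workload
        }
    }
}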

Although premature optimisation is quite clearly a sin, doing optimisation only when the code is as good as complete is equally sinful. Hitting bugs and design flaws early is cheaper than later; catching performance issues sooner will likewise be cheaper than later.

The real question then becomes whether we would even be able to see these problems with unit tests - I suspect not. They are small, one-shot tests, and unless the code became significantly slower, most changes would just disappear into the noise. We would need tests that exercise the code in the ways it will be used live - not the sort of tests you would want to run on each check-in, and tests that we might not even be able to create until we are quite a way into development.

All very interesting; I'm sure there are some PhDs just begging to be written on this topic.

Even if it is not feasible, we should be getting away from the 'do something about performance once we have finished it' mindset, just as we are getting away from the 'fix the bugs once we have got it working' mindset.

Peter Booth

Posts: 62
Nickname: alohashirt
Registered: Aug, 2004

Re: How Early Should You Test for Performance? Posted: Dec 2, 2006 10:06 PM
I am surprised by how religious many of the responses to the original post are. It seems that whenever someone mentions design-time consideration of performance issues, another person will knee-jerk reply with "premature optimization is the root of all evil."

I think this is an oversimplification.

One of the things I enjoy about code profiling or load testing is how frequently it uncovers functional errors or identifies the difference between how the code actually works and how it is believed to work.

My experience suggests that the real sin underlying the "premature optimization" quote is in fact that "wrong-headed misunderstandings of, and attachment to, performance myths are the root of all evil." I know that when I look at a new code base there is a better than 50% chance that any code labelled "fast", "cache", "queue", or "pool" is a performance hole. The creation of these performance holes isn't premature optimization; it's a naive, ineffective attempt at optimization.

It is an unrealistic cop-out to pretend that we can defer performance issues to the end of a project. Every minor architectural decision has a performance dimension to it. If those decisions are made in the presence of better data, then we will get better decisions. There is no downside to architects and developers knowing more about an application's dynamic behavior. Logging performance metrics at the beginning of a dev cycle could improve architectural quality.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: How Early Should You Test for Performance? Posted: Dec 3, 2006 10:51 AM
Peter Hickman wrote: "... we should be getting away from the 'do something about performance once we have finished it' mindset, just as we are getting away from the 'fix the bugs once we have got it working' mindset."

Well said.

Our ambition would be to know much more about the characteristics of our software product as it is being developed.

There are regression benchmarks for some products
http://nenya.ms.mff.cuni.cz/projects/mono/index.phtml#fft_scimark

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: How Early Should You Test for Performance? Posted: Dec 3, 2006 10:55 AM
Peter Booth wrote: "... There is no downside to architects and developers knowing more about an application's dynamic behavior."

Well said.

V.H.Indukumar

Posts: 28
Nickname: vhi
Registered: Apr, 2005

Re: How Early Should You Test for Performance? Posted: Dec 5, 2006 8:54 AM
As early as possible. I often wonder what they really mean when they say 'premature optimization is evil'. It is such a generic statement, prone to misunderstanding! (I think this statement itself would qualify as "the root of all performance evils"!) In my experience, most performance bottlenecks have been due either to a lack of understanding of how the application actually works or to bad architectural decisions that make good performance difficult. So what is evil about gaining knowledge of how your application actually works and using that knowledge? Testing for performance early gives you valuable feedback on performance and on the architectural problems you face. It greatly increases your understanding of the application and also enables you to design better with the newly gained information.

nes

Posts: 137
Nickname: nn
Registered: Jul, 2004

Re: How Early Should You Test for Performance? Posted: Dec 6, 2006 8:30 AM
I think many people get "Premature optimization is the root of all evil" wrong. It is good as a general rule of thumb but can be misunderstood as too universal in its application. In my opinion, the common mistakes that this statement is trying to address are:

1. Optimizations made based on tradition: the programmer reads an article or comment somewhere that idiom y is faster than idiom x and therefore proceeds to replace all uses of x with y. Idiom y is oftentimes unnecessarily complex, error prone, etc. Oftentimes the speed benefit is tied to a specific group of situations that might not apply in his case, or it is specific to one version of a compiler, where the newer versions that come out after the publication of the article will automatically replace idiom x with some optimized version anyway. It also ignores Amdahl's law: saving 1 ms on x while the program is wasting 1000 ms somewhere else (a quick illustration follows below this list).

2. Optimizations done based on the intuition of the programmer: sometimes the programmer thinks that a certain strategy will pay off based on previous experience or knowledge of how certain algorithms ought to perform; he often ends up being wrong. The hardware architecture or compiler might be slightly different from last time, or maybe the library is not implementing the algorithm as he thought it would. Even people with experience in optimizing programs are routinely surprised by where the bottlenecks are.

3. Micro-optimizations that obfuscate the code and make it difficult to understand: often, an abundance of micro-optimizations complicates the code enough to obscure possible macro-optimizations. When the code is inefficient but very simple, it is easier to come up with a faster alternative approach to solving the whole problem.

4. Optimization before feasibility: code gets written and a lot of effort is spent optimizing its parts when it is not yet clear that the approach to solving the problem is even possible or will return correct results. Alternatively, a lot of optimization effort is spent before knowing whether the user will actually end up using the feature often enough to matter.

Optimization based on sound profiling data eliminates problems 1 and 2. Problem 3 can be avoided if one is able to keep the big picture in mind while looking for optimization opportunities. Problem 4 is solved by doing a proof of concept before committing oneself to a solution.
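
To put a number on the Amdahl's law point in item 1 (using the same 1 ms / 1000 ms figures from above), a quick back-of-the-envelope sketch:

// Amdahl's law with the figures from item 1: even if the 1 ms disappears
// entirely, overall time only falls from 1001 ms to 1000 ms - a speedup
// of about 1.001, i.e. roughly 0.1%.
public class AmdahlExample {
    public static void main(String[] args) {
        double optimisedPart = 1.0;      // ms spent in the code being tuned
        double everythingElse = 1000.0;  // ms spent in the rest of the program
        double speedup = (optimisedPart + everythingElse) / everythingElse;
        System.out.println("best possible overall speedup: " + speedup);
    }
}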

If one takes these points into account, I see automated performance testing in conjunction with unit testing as a positive. I especially like the fact that it kills optimizations based on personal opinion rather than reality, as is unfortunately so often the case.

Jeff Langr

Posts: 6
Nickname: jlangr
Registered: Oct, 2003

Re: How Early Should You Test for Performance? Posted: Dec 6, 2006 12:54 PM
> LOL.
>
> * "Premature optimization is the root of all evil." - Hoare and Knuth

The original post didn't say anything about optimizing; it talked about profiling. That's a different matter entirely.

Performance is a requirement, so what I find odd is the article's emphasis on class (unit) level performance testing. That's something you do to baseline a class's performance once you determine it to be a sink. If you want to test performance or load, it's at the level of integrated software.

Regardless, the lesson I've learned, several times now, is that if you have any notion of your performance and load requirements, put tests that represent them in place as soon as possible. "We expect to scale to 500 stores and all transactions must execute in < 0.5s."
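
A minimal sketch of what such a requirement-as-test might look like (JUnit 4 style; the transaction stub is hypothetical, and the 500 stores / 0.5 s figures are just the example numbers above):

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class TransactionPerformanceTest {

    // Hypothetical stand-in; in practice this would drive the integrated
    // system, not a single class.
    private void executeTransaction(int storeId) {
        // ... call into the application here ...
    }

    @Test
    public void transactionsStayUnderHalfASecondAcross500Stores() {
        for (int store = 0; store < 500; store++) {
            long start = System.nanoTime();
            executeTransaction(store);
            long millis = (System.nanoTime() - start) / 1000000L;
            assertTrue("store " + store + " took " + millis + " ms",
                millis < 500);
        }
    }
}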

What appears to be good design isn't always the most adaptable to load/performance optimization. You don't want to find this out the hard way after trying to scale a production installation from a dozen clients to 120. It's usually not that easy.

Markus Kohler

Posts: 1
Nickname: nil
Registered: Jan, 2003

Re: How Early Should You Test for Performance? Posted: Dec 8, 2006 6:53 AM
Hi,
> I am surprised by how religious many of the responses to
> the original post are. It seems that whenever someone
> mentions design-time consideration of performance issues,
> another person will knee-jerk reply with "premature
> optimization is the root of all evil."
>
> I think this is an oversimplification.

Agreed.

>
> One of the things I enjoy about code profiling or load
> testing is how frequently it uncovers functional errors or
> identifies the difference between how the code actually
> works and how it is believed to work.
>
> My experience suggests that the real sin underlying
> the "premature optimization" quote is in fact that
> "wrong-headed misunderstandings of, and attachment to,
> performance myths are the root of all evil." I know that
> when I look at a new code base there is a better than 50%
> chance that any code labelled
> "fast", "cache", "queue", or "pool" is a performance
> hole. The creation of these performance holes isn't
> premature optimization; it's a naive, ineffective attempt
> at optimization.
Agreed, again.

>
> It is an unrealistic cop-out to pretend that we can defer
> performance issues to the end of a project. Every minor
> architectural decision has a performance dimension
> to it. If those decisions are made in the presence of
> better data, then we will get better decisions. There is no
> downside to architects and developers knowing more about an
> application's dynamic behavior. Logging performance metrics
> at the beginning of a dev cycle could improve
> architectural quality.

Very well said. In fact, if an early design decision causes a performance problem and you are near the end of your project, it can become impossible to revert that decision.

I have also found that in large-scale Java applications it becomes difficult to find problems at the end, because the tools (profilers) available today just cannot handle the amount of data. Finding a problem in some basic data structure on a huge application server can be several times more costly than just running a simple enhanced JUnit test that spits out performance-relevant data.
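
As an illustration of the kind of performance-relevant data such an enhanced test could spit out (the map, the entry count, and the rough heap arithmetic are only illustrative - GC timing makes the figures approximate):

import java.util.HashMap;
import java.util.Map;

// Rough sketch: populate a basic data structure and print how much heap it
// appears to retain, so size problems show up long before a full server run.
public class MapFootprint {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        long before = rt.totalMemory() - rt.freeMemory();

        Map<Integer, String> map = new HashMap<Integer, String>();
        for (int i = 0; i < 100000; i++) {
            map.put(i, "value-" + i);
        }

        System.gc();
        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println("approx. bytes retained: " + (after - before)
            + " for " + map.size() + " entries");
    }
}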

Regards,
Markus (http://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/u/6389)
