The Artima Developer Community
Weblogs Forum
To new, or not to new, which is the best one?

5 replies on 1 page. Most recent reply: Apr 21, 2005 1:49 AM by Adi Shavit

Matthew Wilson

Posts: 145
Nickname: bigboy
Registered: Jun, 2004

To new, or not to new, which is the best one? (View in Weblogs)
Posted: Apr 19, 2005 3:31 AM
Summary
Do all C++ applications need to overload operators new and delete? Does failure to do so indicate a naïve presumption on the part of the programmer that the facilities provided by the compiler are good enough?

In an early draft of the important new update to his eponymous Effective C++ - the 3rd Edition is due out in May! - Scott Meyers said that "In my experience, almost all nontrivial C++ applications contain custom implementations of new and delete." He and I had one of our customary "heated debates" on this issue - mainly because I was concerned that, coming from Scott, it could be interpreted to mean that customising new and delete is a sine qua non of all sophisticated C++ development - the result of which was that we agreed to differ. (For a change <g>)

Not seeking to stir that particular pot again, I am nonetheless interested in finding out how well Scott's experience tallies with the wider experience of the C++ community. For my part, I've certainly had occasion to implement custom memory management schemes - both out of interest and out of necessity - but the necessity has only been in high-throughput financial/comms systems, and that's only amounted to around 10-20% of the projects I've worked on.

I'm also interested in whether, if this is indeed the general experience of C++ software developers, it might represent a rather damning appraisal of the memory managers that ship with most/all compilers. And if so, could this be construed to mean that there's something wrong with C++ per se, which we all know is patently false (albeit that it does contain the odd imperfection)? Alternatively, as Scott sagely observes, C++ is used in such a diverse array of application areas that compiler library implementors find themselves in something of an uncomfortable position: "if everybody is demanding in lots of different ways, they're not going to approach optimal for anybody, hence the need to customize".

So wodayafink?

Thanks in advance


Girts Kalnins

Posts: 23
Nickname: smejmoon
Registered: May, 2003

Re: To new, or not to new, which is the best one? Posted: Apr 19, 2005 3:30 PM
For a few years now we have had three kinds of C++ code:
- generated (the program that generates it is written in Python);
- small libraries written to be called from Python;
- prototypes and tests.

And no overloading of new/delete.

Jon Marshall

Posts: 1
Nickname: jondreads
Registered: Apr, 2005

Re: To new, or not to new, which is the best one? Posted: Apr 20, 2005 2:56 AM
The performance of the new and delete implementations shipped with most compilers is poor for multi-threaded applications. Usually they are implemented as one heap protected by a single mutex, so the allocator becomes a bottleneck and limits the scalability of multi-threaded applications on multi-processor machines.

In the past I've used alternative allocators such as Hoard to improve performance.
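
[For illustration, a minimal sketch of what routing the global operators through an alternative allocator can look like. my_alloc and my_free are made-up hooks standing in for whatever thread-friendly allocator (Hoard or otherwise) is actually linked in; the real products typically replace malloc/free at link time rather than requiring this by hand.]

#include <cstdlib>
#include <new>

// Hypothetical hooks standing in for the real allocator you link against.
void* my_alloc(std::size_t n) { return std::malloc(n); }
void  my_free(void* p)        { std::free(p); }

void* operator new(std::size_t size)
{
    if (size == 0)
        size = 1;                       // new must return a unique pointer even for 0 bytes
    if (void* p = my_alloc(size))
        return p;
    throw std::bad_alloc();             // a replacement new must throw on failure
}

void operator delete(void* p) noexcept  // spelt throw() in 2005-era C++
{
    if (p)
        my_free(p);
}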

Matt Gerrans

Posts: 1153
Nickname: matt
Registered: Feb, 2002

Re: To new, or not to new, which is the best one? Posted: Apr 20, 2005 8:28 AM
Way back in the old days (about 15 years ago - at SPC (Software Publishing Company, for those youngsters who have never even heard of it)), I overloaded new and delete. It was mainly to add some debugging features and use the Windows memory allocation routines. It was quite fruitful, too, helping to quickly find memory leaks, heap overwrites and such problems in a fraction of the time it usually took. I later discovered SmartHeap (http://www.microquill.com/), which does the same kind of thing and more. Also, lots of these kinds of features are built into the compilers these days. Since that time, I've never needed to do it again.
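
[A rough sketch of the debugging idea Matt describes: count outstanding allocations in replaced global new/delete so a leak shows up as a non-zero count at exit. A real tool (SmartHeap, the VC++ debug CRT) also records call sites and detects heap overwrites; this only illustrates the mechanism.]

#include <cstdio>
#include <cstdlib>
#include <new>

static std::size_t g_live_blocks = 0;   // outstanding allocations (not thread-safe)

static void report_leaks()
{
    std::printf("blocks still allocated at exit: %lu\n",
                static_cast<unsigned long>(g_live_blocks));
}

void* operator new(std::size_t size)
{
    static int registered = std::atexit(report_leaks);  // register the report once
    (void)registered;
    void* p = std::malloc(size ? size : 1);
    if (!p)
        throw std::bad_alloc();
    ++g_live_blocks;
    return p;
}

void operator delete(void* p) noexcept
{
    if (!p)
        return;
    --g_live_blocks;
    std::free(p);
}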

Perhaps statements to the effect that "all non-trivial projects overload the allocation operators" drive people to do so unnecessarily. After all, who wants to work on a trivial project?

But then, what do I know? I only work on trivial projects with C++ these days, like writing small performance-critical or low-level components in C++ that can be easily used by more productive (in my opinion) languages like C# or Python.

indranil banerjee

Posts: 38
Nickname: indranil
Registered: Nov, 2004

Re: To new, or not to new, which is the best one? Posted: Apr 20, 2005 2:46 PM
Google has recently released TCMalloc as open source (http://goog-perftools.sourceforge.net/doc/tcmalloc.html). It seems to have similar goals to SmartHeap.

Multithreaded/multiprocessor systems look like a valid case where you may not want to use the default malloc. But I wouldn't want to roll my own.

The Boost memory pool library is another useful allocator if you want to recycle a lot of small objects: http://boost.org/libs/pool/doc/index.html
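
[A small sketch of the Boost Pool idea mentioned here: recycle many small objects without hitting the general-purpose heap for each one. Assumes Boost is installed; Node is a made-up example type.]

#include <boost/pool/object_pool.hpp>

struct Node { int value; Node* next; };

int main()
{
    boost::object_pool<Node> pool;

    Node* n = pool.construct();   // allocated out of the pool, not the general heap
    n->value = 42;
    n->next  = 0;

    pool.destroy(n);              // handed back to the pool for reuse
    // anything still outstanding is released when `pool` itself is destroyed
}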

As an application developer these are not the kind of tools I would ever write, but I would certainly use them if the need arose.

Adi Shavit

Posts: 6
Nickname: adish
Registered: Apr, 2005

Re: To new, or not to new, which is the best one? Posted: Apr 21, 2005 1:49 AM
I write a lot of (real-time) video processing code. This means that after some initialization the same code runs on each input frame.
I find that pre-allocating work buffers and using amortized-cost containers (e.g. std::vector<>) usually removes most of the run-time allocation in my applications.
The benefit is that the actual allocation functions' performance has no real impact after the program has been running for a while.
I guess, in a sense, this is a kind of implicit cache/pooling scheme.
In other cases, I have used Loki::SmallObject and seen some nice improvements in running time for certain usage patterns (but not for others).
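
[A rough sketch of the pattern Adi describes: size the work buffers once, then reuse them for every frame so the steady-state loop performs no allocation. Buffer, process_frame and kFramePixels are made up for the example.]

#include <cstddef>
#include <vector>

typedef std::vector<unsigned char> Buffer;

const std::size_t kFramePixels = 640 * 480;   // made-up frame size

// Per-frame work reuses the buffers passed in, so the steady state allocates nothing.
void process_frame(const Buffer& in, Buffer& scratch, Buffer& out)
{
    scratch.clear();   // clear() keeps the capacity, so no reallocation here
    out.clear();
    // ... do the real per-frame work into scratch/out ...
    (void)in;
}

int main()
{
    Buffer input(kFramePixels), scratch, output;
    scratch.reserve(kFramePixels);   // pay the allocation cost up front
    output.reserve(kFramePixels);

    for (int frame = 0; frame < 1000; ++frame)   // stand-in for the capture loop
        process_frame(input, scratch, output);
}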

On a different note, I think that overloading the delete operator might be a powerful tool (although syntactically imperfect ;-)) for implementing RAII for resources that cannot be released with delete, while still allowing them to be used with standard smart pointers - but that's another story altogether <g>.

Adi
