This post originated from an RSS feed registered with Java Buzz
by Joe Shelby.
Original Post: Things to know about Memory and RMI (&EJB)
Feed Title: Joe's Java Jottings
Feed URL: http://www.blog-city.com/bc/
Feed Description: Notes, observations, and occasional other stuff on Java, with concentrations on Swing, XML, and the Semantic (Object) Web.
On the server, you need memory not just for the objects themselves. You need enough to hold the live object, plus its serialized form, plus the byte arrays used to push it through a ByteArrayOutputStream to the client. That's a LOT of memory sometimes, particularly when your object is rather large.
So even though your application (or Application Server) doesn't really need all that much memory to run in general, it's going to spike when clients fetch large objects in one lump sum.
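You can see the duplication for yourself by serializing a big object by hand, the way RMI does internally. This is just a sketch (BigBean and its ~8 MB array are made-up stand-ins for a large server-side bean, not anything from a real app):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCost {
    // Hypothetical payload standing in for a large server-side bean.
    public static class BigBean implements Serializable {
        private static final long serialVersionUID = 1L;
        final double[] data = new double[1_000_000]; // roughly 8 MB on the heap
    }

    public static void main(String[] args) throws IOException {
        BigBean bean = new BigBean();
        // Roughly what RMI does under the hood: while the call is in
        // flight, the live object, the serialized stream, and the
        // buffer's internal byte array all coexist on the heap.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(bean);
        }
        byte[] wire = buffer.toByteArray(); // yet another copy of the data
        System.out.println("serialized size: " + wire.length + " bytes");
    }
}
```

So one ~8 MB bean briefly costs you three-ish times that much heap on the server side, which is exactly the spike described above.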
Two ways to handle it:
1) break up the object into smaller chunks (which is good for user feedback anyway: since RMI hangs the thread that's downloading the object, you can't report status to the user; with smaller chunks (and an earlier message saying how many chunks will be needed) you can give percentage-based progress-bar status reports). [A variant on this is to get rid of the beans and go to a different data access model, but the effect is the same: you've broken one big call up into smaller calls.]
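The chunking idea might look something like this. All the names here (ChunkedDataService, the 64 KB chunk size) are mine, not from any particular API, so treat it as a sketch of the shape rather than a recipe:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.Arrays;

public class Chunking {

    // Hypothetical remote interface: one call up front announces the
    // chunk count (so the client can drive a progress bar), then each
    // subsequent call serializes only one chunk's worth of bytes.
    public interface ChunkedDataService extends Remote {
        int getChunkCount(String datasetId) throws RemoteException;
        byte[] getChunk(String datasetId, int index) throws RemoteException;
    }

    static final int CHUNK_SIZE = 64 * 1024; // 64 KB per RMI call (arbitrary)

    // Server-side helpers a ChunkedDataService implementation could use.
    static int chunkCount(int totalBytes) {
        return (totalBytes + CHUNK_SIZE - 1) / CHUNK_SIZE;
    }

    static byte[] chunk(byte[] whole, int index) {
        int from = index * CHUNK_SIZE;
        int to = Math.min(from + CHUNK_SIZE, whole.length);
        return Arrays.copyOfRange(whole, from, to);
    }
}
```

The client loop would call getChunkCount once, then getChunk in a loop, reporting 100 * (i + 1) / count percent after each call; no single RMI call ever serializes more than CHUNK_SIZE bytes.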
2) up your memory. For example, JBoss by default still uses only 64 MB, the Sun JVM's default value. There's a line in bin/run.bat to raise it, but it's not turned on in a clean installation.
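Raising the heap comes down to the standard JVM flags. Something like the following in bin/run.bat (the exact line and variable name vary between JBoss versions, so this is an approximation rather than the literal shipped script):

```shell
rem Uncomment/adjust in JBoss's bin\run.bat; -Xms and -Xmx are the
rem standard Sun JVM options for initial and maximum heap size.
set JAVA_OPTS=%JAVA_OPTS% -Xms128m -Xmx512m
```

The -Xmx value is the one that matters for the serialization spike: it caps how big the heap is allowed to grow.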
It would be nice if the JVM could detect that a memory spike is happening because of a very specific type of situation and allocate the memory anyway, knowing it will be released and freed very quickly (as it is in the RMI serialization case).
oh well.