[kaffe] What is the rationale for Kaffe's VM memory limits? (The "-mx" and "-ss" switches.)

Dalibor Topic robilad at yahoo.com
Fri Mar 14 05:54:01 PST 2003

hi Mark,

--- Mark J Roberts <mjr at znex.org> wrote:
> I am sick and tired of manually overriding the heap
> size limit in
> order to run Freenet without hitting the arbitrary
> default 64MB
> limit.

I think that's due to a bug in kaffe (or its class
library), as Sun's JDK seems to be able to run Freenet
in 30 MB or so, according to what Greg posted here. I'm
trying to track that one down with Greg, but it would
be helpful if you joined in, as a Freenet developer.

I've looked at the gcstats output provided by Greg,
and most of the objects lying around are either
java.lang.Object or java.util.HashMap$Entry instances,
with a bunch of freenet.fs.dir.* objects following.
Grepping through the source indicates that the
java.lang.Objects are used by Freenet for
synchronization, right?

My prime suspects at the moment are the HashMap.Entry
instances, so I posted a patch to track down who's
creating all of them, hoping that would provide some
clue as to where they are used, and why they don't go
away ;)

The idea is to create a stack trace for every
constructed instance of HashMap.Entry and map the
trace to the number of instances created with the same
stack trace. So the HashMap.Entry constructor
increments the counter for its stack trace, or inserts
a new entry into the map.
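The counting idea can be sketched like this. The AllocTracker class below is a stand-in for illustration, not the actual patch — in the patch the bookkeeping would sit directly in the HashMap.Entry constructor:

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Stand-in for the instrumentation in the patch: map each creation
// stack trace (as a string) to the number of instances allocated
// with that trace.
public class AllocTracker {
    private static final Map TRACES = new HashMap();

    // Called once per construction of the instrumented class.
    public static synchronized void record() {
        // Capture the current stack trace as a string key.
        StringWriter sw = new StringWriter();
        new Throwable().printStackTrace(new PrintWriter(sw, true));
        String key = sw.toString();
        Integer n = (Integer) TRACES.get(key);
        TRACES.put(key, n == null
                ? new Integer(1)
                : new Integer(n.intValue() + 1));
    }

    // Print every distinct creation site with its allocation count.
    public static synchronized void dump() {
        for (Iterator it = TRACES.entrySet().iterator(); it.hasNext();) {
            Map.Entry e = (Map.Entry) it.next();
            System.out.println(e.getValue() + " instance(s) created from:");
            System.out.println(e.getKey());
        }
    }

    public static synchronized int distinctTraces() {
        return TRACES.size();
    }
}
```

Allocations made from the same source line share one stack trace, so the dump groups them together and the biggest counters point at the code that keeps creating entries.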

The patch has (at least) one problem: it relies on
Runtime.exit() to print the map of stack traces, and
apparently it's hard to get Freenet to exit(). So I'm
not sure how to proceed from here: if it is possible
to tell a Freenet node to shut down through exit(),
that would be helpful; otherwise I could add a thread
that prints out the map at regular intervals.
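That fallback thread could look roughly like this sketch; the dumper Runnable is a placeholder for whatever routine prints the map of stack traces:

```java
// A daemon thread that runs a reporting routine at a fixed interval,
// so the statistics get printed even if the process never reaches
// Runtime.exit().
public class PeriodicDumper extends Thread {
    private final Runnable dumper;
    private final long intervalMillis;

    public PeriodicDumper(Runnable dumper, long intervalMillis) {
        this.dumper = dumper;
        this.intervalMillis = intervalMillis;
        setDaemon(true); // don't keep the VM alive just for reporting
    }

    public void run() {
        while (!isInterrupted()) {
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                return; // asked to stop
            }
            dumper.run();
        }
    }
}
```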

The other course of action would be to replace kaffe's
HashMap (if that's the culprit) with another
implementation, say GNU Classpath's, and see if that
yields any benefits.

> I just don't understand why these options are even
> there, or why
> they are not unlimited by default. It is _breaking_
> applications.

In theory, they are not necessary. In practice, it's
rather inconvenient to have a maximum memory setting
higher than the amount of available RAM: in that case,
the gc might spend a lot of time asking the virtual
memory manager to shuffle pages around, which degrades
performance severely. The gc kicks in when a certain
percentage of memory is used.

The 64 MB limit is the same one Sun uses for the JDK
1.4.1, for example; see its documentation under -Xmx.
So if you're running into problems because your
application runs out of memory on kaffe, but works
fine on Sun's JDK, then that's a bug in kaffe, in my
opinion. Setting -mx to unlimited by default would
just mask it ;)
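In the meantime, the override is just a command-line switch on both VMs; the Freenet main class name below is a placeholder:

```shell
# Raise the heap ceiling explicitly instead of relying on the
# 64 MB default (main class name is a placeholder).
kaffe -mx 128M freenet.node.Main    # kaffe's spelling of the option
java -Xmx128m freenet.node.Main     # Sun JDK equivalent
```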

Additionally, here's a blurb on the usefulness of the
Java heap setting from the Java Server documentation:
"The heap limit helps determine when a garbage
collection (GC) will take place. You can set the limit
high to avoid a GC, but then you could end up paging.
Generally, we have found that it is better to take the
hit from a GC fairly often and to avoid paging. The
other thing to keep in mind about garbage collection
is that the entire server stops while it is going. So
if you have an occasional, very long GC, the clients
will hang for that time. That leads to more
variability in service quality than more frequent but
smaller GCs."

In short: let's fix the bug ;)

dalibor topic
