Mon Jun 21 19:03:09 PDT 2004
Looking at the allocation size distribution, I found that the size distribution
8 16 24 32 40 48 56 88 112 120 144 192 240 496 1000 2016 4040
reduces the slack to about 6.6 bytes per object for javac and 5.6 for the simulator.
Clearly, having more allocation sizes requires having more freelists.
The additional worst-case fragmentation introduced by those is less than
the number of freelists times the gc_block size.
For the current Kaffe, this would be less than 8 * 4096 or 32K.
By using 19 freelists, this would be less than 19 * 4096 or 76K.
This is the worst case; the average case would be about half that,
depending on the distribution.
The savings in slack by using 19 freelists instead of 8 would
be ca. 720,000 bytes for javac and ca. 100,000 for the simulator.
The worst-case additional overhead in freelists would be about 44K
in both cases.
So, I suggest that instead of computing the alloc sizes from some
arbitrarily chosen number of tiles, we start out with a fixed list.
One could also imagine that Kaffe would monitor allocations, and
by itself start new freelists for frequently allocated object sizes,
depending on the application.
Another idea might be to provide a hook such that Kaffe could at
startup time be parametrized with a size distribution that was obtained
by tracing a particular application and that fits the allocation pattern
of that application well.
For the short term, I suggest the heuristic table with 19 freelists.
What do you think?