Problem with StringBuffer

Mo DeJong mdejong at cygnus.com
Sun Apr 2 23:14:57 PDT 2000


On Sun, 2 Apr 2000, Wolfgang Muees wrote:

> 
> On Sat, 01 Apr 2000, Tatu Saloranta wrote:
> > Wolfgang Muees wrote:
> > > 
> > > The right solution for this problem is IMO: don't reuse StringBuffer.
> > > It is designed primarily as an input buffer for a single string.
> > 
> > I think I disagree here; at least if there's no other class for similar
> > purpose. I am interested in optimizing Java-programs, and in general,
> > one of the most efficient optimizations is to recycle objects. Object
> > creation is not a cheap operation. Especially in this case, where
> > StringBuffer does allocate a character array, it means there are at
> > least 2 memory allocations and other initialization code. If the
> > array truncation can be done when the array is being copied (during
> > destringify() or whatever the method was), it won't add a new array
> > allocation.
> > 
> Tatu, I think you miss an important point here:
> 
> - most Java programmers try to code a program that behaves well
>    under all JVMs available.
> - The default StringBuffer implementation from Sun has problems
>    with reuse of large StringBuffers.
> 
> So, IMO all you can do is to code around this problem.
> 
> best regards
> Wolfgang
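
The reuse problem described above can be sketched as follows (a
minimal sketch with hypothetical names; the behavior described in the
comments is that of Sun's JDK 1.1/1.2 StringBuffer):

```java
// In Sun's 1.1/1.2 StringBuffer, toString() shares the internal char
// array with the returned String and marks the buffer "shared"; the
// next setLength() or append() must then copy the whole array. So a
// reused large buffer can pay a full-size array copy per iteration,
// and its large array stays alive as long as the buffer does.
public class ReuseSketch {
    public static void main(String[] args) {
        String[] lines = { "alpha", "beta", "gamma" };
        StringBuffer buf = new StringBuffer();
        for (int i = 0; i < lines.length; i++) {
            buf.setLength(0);            // reuse; copies if array is shared
            buf.append("line: ").append(lines[i]);
            String s = buf.toString();   // marks the array as shared again
            System.out.println(s);
        }
    }
}
```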


This brings up an interesting question. Should Kaffe always
maintain "compatibility" with a Sun JDK implementation
(1.1, 1.2, or 1.3) even when a Sun implementation
is clearly wrong or inefficient?

I have to wonder when we are going to give up on waiting
for Sun to fix bugs in the core libraries and just make
sure Kaffe is reasonable.

My "pet peeve" API is in the java.util.zip package. You
can check out the jar implementation I wrote for Kaffe to
see an example of the problem with the ZipEntry
implementation. In short, the Sun zip impl forces the user
to generate a CRC checksum on the data written to the zip
file, even though the zip library already computes one internally.

Here is a quick example of what is currently required for
the Sun impl (and mirrored in the Kaffe impl). Also note
that this only applies to uncompressed zip entries
(it is yet another one of the mysteries of the Sun impl).


ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipfile));
InputStream in = new FileInputStream(entryfile);

ZipEntry ze = new ZipEntry(entryname);
ze.setMethod(ZipEntry.STORED);
ze.setCrc(0);

CRC32 crc = new CRC32();
in = new CheckedInputStream(in, crc);

zos.putNextEntry(ze);
readwriteStreams(in, zos); // copies all the data from in to zos
ze.setCrc(crc.getValue());
zos.closeEntry();          // closes the current entry on the ZipOutputStream
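
For reference, here is a self-contained variant of the required
pattern that a strict Sun-style implementation accepts: the CRC (and,
for STORED entries, the size) is computed in a separate first pass
over the data before the entry is even opened. The entry name and
data are hypothetical, but every call is from the real
java.util.zip API.

```java
import java.io.*;
import java.util.zip.*;

public class StoredEntryDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = "hello zip".getBytes("UTF-8");

        // First pass over the data: compute the CRC ourselves.
        CRC32 crc = new CRC32();
        crc.update(data);

        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        ZipOutputStream zos = new ZipOutputStream(bout);

        ZipEntry ze = new ZipEntry("entry.txt");
        ze.setMethod(ZipEntry.STORED);
        ze.setSize(data.length);   // STORED also demands the size up front
        ze.setCrc(crc.getValue());
        zos.putNextEntry(ze);
        zos.write(data);           // second pass over the same bytes
        zos.closeEntry();
        zos.close();

        // Read the archive back to show the entry round-trips.
        ZipInputStream zin = new ZipInputStream(
                new ByteArrayInputStream(bout.toByteArray()));
        ZipEntry back = zin.getNextEntry();
        byte[] out = new byte[(int) back.getSize()];
        int off = 0, n;
        while (off < out.length
                && (n = zin.read(out, off, out.length - off)) > 0) {
            off += n;
        }
        System.out.println(back.getName() + ": " + new String(out, "UTF-8"));
    }
}
```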



Why on earth should I need to do this? Code like the following
should run faster because a second CRC calculation over the entire
stream would not be needed. This code would not run on the Sun
JDK because it raises an exception in the closeEntry() method, but
we could fix the problem in Kaffe.

ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipfile));
InputStream in = new FileInputStream(entryfile);

ZipEntry ze = new ZipEntry(entryname);
ze.setMethod(ZipEntry.STORED);

zos.putNextEntry(ze);
readwriteStreams(in, zos); // copies all the data from in to zos
zos.closeEntry();          // closes the current entry on the ZipOutputStream
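
One way to picture what a friendlier closeEntry() could do internally
is a small wrapper over today's API (a hypothetical helper, not part
of any real library): it buffers the entry data, computes the CRC and
size itself, and only then writes the STORED entry, so the caller
never touches a CRC32.

```java
import java.io.*;
import java.util.zip.*;

public class StoredHelper {
    // Hide the CRC bookkeeping: buffer the data, compute CRC and size,
    // then write the entry in one go.
    static void writeStored(ZipOutputStream zos, String name, InputStream in)
            throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        byte[] data = buf.toByteArray();

        CRC32 crc = new CRC32();
        crc.update(data);

        ZipEntry ze = new ZipEntry(name);
        ze.setMethod(ZipEntry.STORED);
        ze.setSize(data.length);
        ze.setCrc(crc.getValue());
        zos.putNextEntry(ze);
        zos.write(data);
        zos.closeEntry();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ZipOutputStream zos = new ZipOutputStream(out);
        writeStored(zos, "a.txt",
                new ByteArrayInputStream("hi".getBytes("UTF-8")));
        zos.close();

        ZipInputStream zin = new ZipInputStream(
                new ByteArrayInputStream(out.toByteArray()));
        System.out.println("first entry: " + zin.getNextEntry().getName());
    }
}
```

The buffering costs memory for large entries, but it shows that the
double-CRC burden is an API choice, not something inherent to the zip
format.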


Any comments?

Mo DeJong
Red Hat Inc.


