[kaffe] KeyEvent -> JDK 1.4 patch (And longer explanation;)

Dalibor Topic robilad at yahoo.com
Mon Oct 7 13:46:40 PDT 2002


Hi Jukka,

--- Jukka Santala <jsantala at morphine.tml.hut.fi>
wrote:
> On Sun, 6 Oct 2002, Dalibor Topic wrote:
> > Different applications will need different profiles.
> > Encoding profile information in comments is a bad idea, in my
> > opinion, as it leads to a lot of uninformative comments. I'd prefer
> > to see an external
> 
> Hm, "Uninformative comments", what about JavaDoc? ;)

I'll try to elaborate. Imagine we used an @profile <name> tag to record
which profiles a class belongs to. Every time someone introduced a new
profile for kaffe, they would have to add a lot of comments in a lot of
classes to define the new profile. That would lead to an explosion of
not very informative comments. ;)

That's why I would prefer external profile files.
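
For illustration, the in-source approach would look roughly like this
(the @profile tag and the class are made up; javadoc would just treat
it as an unknown tag):

    /**
     * Hypothetical in-source profile tagging.
     *
     * @profile core
     * @profile jdk1.1
     * @profile embedded
     */
    public class SomeCoreClass {
        // every new profile means editing doc comments like the ones
        // above in hundreds of library classes
    }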
 
> At any rate, the problem I was trying to communicate is that you can't
> get away with class-level division all of the time. As a case-in-point
> example, in JDK 1.4 AppletContext got a getStreamKeys() method... which
> returns an Iterator.
> 
> Unfortunately, AppletContext is in JDK 1.1, but Iterator isn't. If you
> try to implement class-level division for what's included in a JDK 1.1
> profile, you either have to leave that method off the JDK 1.4 profile
> as well, or include most of the Collections Framework in the JDK 1.1
> profile. Besides which, classes implementing AppletContext will break
> if they don't implement getStreamKeys().
> 
> There are a lot of similar situations, but that's one I'm familiar
> with.
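
To make the dependency concrete: this is the kind of 1.4-only code that
pulls java.util.Iterator in through AppletContext (the applet below is
just an illustration, not something from kaffe or XSmiles):

    import java.applet.Applet;
    import java.applet.AppletContext;
    import java.util.Iterator;

    // getStreamKeys() was added to the 1.1-era AppletContext interface
    // in JDK 1.4 and returns a java.util.Iterator, so a profile that
    // contains AppletContext also needs the collections classes behind
    // Iterator.
    public class StreamKeysDemo extends Applet {
        public void start() {
            AppletContext ctx = getAppletContext();
            for (Iterator keys = ctx.getStreamKeys(); keys.hasNext(); ) {
                System.out.println(keys.next());
            }
        }
    }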

The SQL interfaces have also changed in 1.4, breaking Mauve tests, for
example. If you check kaffe's sources in java/sql/, you'll see that I
left the new methods commented out when I updated the interfaces.

I would like to see:
* an external (possibly XML) profile file
* a parser for those files, which takes a profile file and a compiled
  rt.jar and generates a new rt.jar from it.

That means we'd have to operate at the bytecode level. There are many
toolkits for such projects. I believe that's preferable to writing a
Java parser and doing a source-to-source translation on all of the Java
library source code.

If we use an external format, we can add new granularity options easily
as our bytecode ripping skills increase. :) Start with a package list,
then add classes, fields, and methods later. We'd eventually arrive at
the tool we need. Japitools might be quite useful for providing those
API descriptions, too.
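
Here is a rough sketch of what the first, package-list stage of such a
tool could look like. The plain text profile format (one package per
line) and the class name are made up for illustration; a real tool
would use a bytecode toolkit so it can also strip individual classes,
fields and methods:

    import java.io.*;
    import java.util.*;
    import java.util.jar.*;

    // Copies only the classes whose package is listed in the profile
    // file from the input rt.jar into a new, smaller jar.
    public class ProfileFilter {
        public static void main(String[] args) throws IOException {
            // usage: ProfileFilter <profile file> <rt.jar> <out.jar>
            Set prefixes = new HashSet();
            BufferedReader in =
                new BufferedReader(new FileReader(args[0]));
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                if (line.length() > 0)
                    prefixes.add(line.replace('.', '/') + "/");
            }
            in.close();

            JarFile src = new JarFile(args[1]);
            JarOutputStream dst =
                new JarOutputStream(new FileOutputStream(args[2]));
            byte[] buf = new byte[4096];
            for (Enumeration e = src.entries(); e.hasMoreElements(); ) {
                JarEntry entry = (JarEntry) e.nextElement();
                if (!matches(entry.getName(), prefixes))
                    continue;
                dst.putNextEntry(new JarEntry(entry.getName()));
                InputStream is = src.getInputStream(entry);
                int n;
                while ((n = is.read(buf)) != -1)
                    dst.write(buf, 0, n);
                is.close();
                dst.closeEntry();
            }
            dst.close();
            src.close();
        }

        private static boolean matches(String name, Set prefixes) {
            for (Iterator i = prefixes.iterator(); i.hasNext(); ) {
                if (name.startsWith((String) i.next()))
                    return true;
            }
            return false;
        }
    }

A profile file for a minimal core could then simply list java.lang,
java.io and java.util, and the granularity can grow from there.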

> > additional benefit of having a couple of small jar files would be
> > decreased memory requirements to build kaffe. Currently it takes
> > more than 32 MB to compile rt.jar using kjc on i386-linux.
> 
> I don't see any major reasons to object to that plan, although most of
> those jar files would be very small, and thus of little overall
> advantage. Do we already have java.util as a separate jar file, or is
> it too widely depended on? Another large overall package would be

We don't have anything separate apart from the libraries/extensions
stuff (that means RMI is separate). java.util is used quite a bit in
java.lang and other classes, so I don't think separating it makes a lot
of sense. java.util should stay in the core rt.jar.

> java.io, but I think practically everything uses that. Breaking awt off

Think about System.out/System.err, which both reference java.io.
java.io should go into the core rt.jar, too.
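
Just to make the point concrete: even a trivial program drags java.io
in, because the standard streams are java.io types. (The class below is
only an illustration.)

    // System.out is a java.io.PrintStream, so java.io has to live in
    // the same core rt.jar as java.lang.
    public class HelloCore {
        public static void main(String[] args) {
            java.io.PrintStream out = System.out;
            out.println("hello from the core rt.jar");
        }
    }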

> into a separate package probably gives the largest effect. Overall I
> don't think there are many applications where those separate packages
> are going to be overly useful.

There have been people who cut away quite a bit to get
a minimal version of kaffe suitable for their
application. Check
http://www.kaffe.org/pipermail/kaffe/2000-May/006575.html
for a reference. Putting the cut-off parts into separate packages
should help people who want to do the same.

> > The values for VK_* range from 0x00 to 0xFF in the current kaffe
> > implementation (JDK 1.1); did that change in JDK 1.4? I assume that
> > VK_* represent all keyCodes one can have in a Java application, did
> > I get that wrong? If the range is only 256 values, then we should
> > use an array, in my opinion.
> 
> VK_CUT is 0xFFD1. It should be noted that I doubt the >0xFF keys can
> currently be generated by the VM/input layer; at some point they
> should. Also, the Java API doc says one should not rely on the values
> of the VK_ constants, so it would be perfectly in keeping with the
> spec to remap them all to the 0xFF range. Overall I think keeping with
> Sun's values is best. As suggested, we could use a hybrid: if the
> value is 0xFF or less (the most common case), use an array, otherwise
> check ranges. However, on overall analysis, I think that function is
> called too rarely to warrant the speed-size tradeoff. Although if we
> use a BitSet (dependency on java.util ;), it's a tough call. I think
> the present way is simplest and safest.

Thanks for clearing that up. I fully agree.
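
For the record, the hybrid you describe would look roughly like the
sketch below. The method name and the particular key codes in the table
are only illustrative; which codes belong in the set depends on what
the real check in KeyEvent is testing for.

    import java.awt.event.KeyEvent;

    // Hybrid lookup: a 256-entry table covers the common 0x00-0xFF key
    // codes, and the few constants above 0xFF (e.g. VK_CUT = 0xFFD1)
    // are handled by explicit checks.
    class KeyCodeTable {
        private static final boolean[] LOW = new boolean[256];

        static {
            // mark whichever low codes belong to the set being tested,
            // e.g. the function keys F1-F12 (0x70-0x7B)
            for (int c = KeyEvent.VK_F1; c <= KeyEvent.VK_F12; c++)
                LOW[c] = true;
        }

        static boolean contains(int keyCode) {
            if (keyCode >= 0 && keyCode < LOW.length)
                return LOW[keyCode];           // common case: array
            // rare case: codes above 0xFF
            return keyCode == KeyEvent.VK_CUT
                || keyCode == KeyEvent.VK_COPY
                || keyCode == KeyEvent.VK_PASTE;
        }
    }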
 
> > please don't work around kaffe's bugs in your code.
> > Submit patches. That fixes it for everyone.
> 
> Yeah, I do, but since we're aiming at Kaffe compatibility, it's a bit
> awkward if the patches haven't been incorporated into the latest Kaffe
> release. Since our roadmap (And Richard Stallman:) currently calls for
> a Kaffe-compatible release at the end of the month, this is a bit
> tricky without a tighter Kaffe release schedule or work-arounds in
> XSmiles.

I'd expect RMS to call for a gcj-compatible release ;)
 
> Which is why I'm asking for more frequent Kaffe releases.

I'll reply to it in a separate mail. Gotta go.

cheers,

dalibor topic
