JavaLobby and JavaOS

Vijay Saraswat vj at
Mon Oct 13 04:17:31 PDT 1997

   From: Sean McDirmid <mcdirmid at>

   On Sat, 11 Oct 1997, Fletcher E  Kittredge wrote:

   > As an ex-Mach and C hacker, let me say that there are real good
   > technical reasons that Mach and Hurd failed.  The micro-kernel
   > architecture is dog slow; adding Java to the mix is not going to speed
   > it up.
   > Before wasting good Java developer time on this, please have someone
   > you trust go back and read all the papers from the late 80's, early
   > 90's which detail the significant performance problems with a
   > micro-kernel architecture...

   Now hold on here: if I understand it, the micro-kernel architecture was
   slow because of the way it kept its form of modularity, through runtime
   barriers and abstractions.  The great thing about Java code is that it is
   safe, and this safety can be shown (lacking in kaffe, but I've written a
   verifier for kimera - ).  Several well-performing
   (and commercial) operating systems have been "derived" from
   micro-kernels, such as NT, NextStep and perhaps the upcoming BeOS.  How
   about a derived micro-kernel that takes Java code, compiles it, and
   places it close to the micro-kernel as a trusted extension?  We would no
   longer have a micro-kernel, but you might be able to get the performance
   you are looking for.  You still get the safety and abstraction that the
   original micro-kernel provided (through the virtual machine compiled to
   machine code interface).

I agree with Sean here. A strongly typed, dynamically linked
language like Java makes some fresh approaches possible.

To go a step beyond what Sean is saying: it is in fact possible
(the "declarative bytecode verification" approach) to develop
bytecode verifiers *systematically*, from first principles, using
a constraint-based specification of the opcodes involved. One can
translate an input bytecode program, P, compositionally (bytecode
for bytecode) into a constraint program, T(P), which "obviously"
states the constraints on the run-time type-state of the Java
Virtual Machine that must hold for program execution to be
type-safe. T(P) is "loop-free" and takes time proportional to its
length (which is proportional to the length of P) to execute. If
T(P) does not deadlock or produce "false", then P is type-safe.
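The compositional, per-opcode flavor of this can be sketched in a
few lines. The toy verifier below is my own illustration, not the
actual kimera or Javasoft code: it covers only a made-up
four-opcode subset, and it checks each opcode's constraint eagerly
in one linear pass rather than emitting a separate constraint
program T(P). The structural point survives, though: one local
constraint per bytecode, checked in time proportional to the
program's length.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy sketch of constraint-per-opcode bytecode verification.
// The abstract type-state is just the operand stack of JVM types.
public class ToyVerifier {
    enum Type { INT, REF }

    // A tiny, hypothetical subset of opcodes (operands omitted).
    enum Op { ICONST, ALOAD_NULL, IADD, POP }

    // Returns true iff every per-opcode constraint holds, i.e. the
    // program is type-safe with respect to this toy abstraction.
    static boolean verify(List<Op> program) {
        Deque<Type> stack = new ArrayDeque<>();   // abstract type-state
        for (Op op : program) {
            switch (op) {
                case ICONST:                      // pushes an int
                    stack.push(Type.INT); break;
                case ALOAD_NULL:                  // pushes a reference
                    stack.push(Type.REF); break;
                case IADD:                        // constraint: two ints on top
                    if (stack.size() < 2
                        || stack.pop() != Type.INT
                        || stack.pop() != Type.INT) return false;
                    stack.push(Type.INT); break;
                case POP:                         // constraint: stack non-empty
                    if (stack.isEmpty()) return false;
                    stack.pop(); break;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // iconst; iconst; iadd  -- well-typed
        System.out.println(verify(List.of(Op.ICONST, Op.ICONST, Op.IADD)));
        // iconst; aload; iadd   -- adds a reference to an int: rejected
        System.out.println(verify(List.of(Op.ICONST, Op.ALOAD_NULL, Op.IADD)));
    }
}
```

The first program is accepted and the second rejected; the whole
check is a single pass, with no fixpoint iteration, which is the
performance property claimed for T(P) above.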

This approach differs slightly from the one that Sean and
Javasoft have implemented. In their case, there are separate
*bytecode verifiers*, usually largish (10-15 page) C programs
implementing a "data-flow" analysis and documented by 10-15 pages
of informal English (I have not seen Sean's documentation yet),
about whose correctness it is hard to say anything formal.

In any case, the bottom line for this list should be, I think,
that one can assume that the subproblem of verifying the
type-safety of JVM-like bytecode is solved, with, of course, one
important caveat: native code. (Many of the "core" classes in
Java in fact contain natively implemented methods, and one has to
use separate techniques for verifying those.)

I would not be too surprised if this approach (Java's approach)
of doing "link-time" (i.e. first-instruction-execution-time)
type-checking actually resulted in significant performance
*improvements* over standard OSes. One thing to keep in mind is
that Java's architecture is such that the bytecode verifier can
in fact accept some input code *even if it is not type-safe in
isolation*, as long as it is type-safe when combined with the
other classes that are already loaded. That is, there are some
possibilities for inter-classfile optimizations (e.g.
eliminating some class casts generated by the compiler) in a
type-safe way, which can produce even more performance benefits
than are possible with "straight" Java verification, as done
currently by Sun's Java systems.
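To make the cast-elimination point concrete, here is a small
sketch of my own (not anything Sun ships): it uses the reflection
method Class.isAssignableFrom as a stand-in for the subtype test a
link-time optimizer would run over the set of already-loaded
classes, to decide whether a compiler-generated checkcast can
never fail and is therefore removable.

```java
// Sketch: a link-time optimizer that can see the loaded class
// hierarchy can prove some compiler-generated casts redundant.
public class CastElimination {
    // A cast (T) expr is redundant when the expression's known
    // class is already assignable to T; Class.isAssignableFrom is
    // exactly that subtype test over loaded classes.
    static boolean castIsRedundant(Class<?> target, Class<?> known) {
        return target.isAssignableFrom(known);
    }

    public static void main(String[] args) {
        // javac emits a checkcast for "(CharSequence) someObject",
        // but once we know the value is a String, the cast can
        // never fail and can be eliminated:
        System.out.println(castIsRedundant(CharSequence.class, String.class));
        // The downcast in the other direction is not provably safe
        // in isolation, so its check must stay:
        System.out.println(castIsRedundant(String.class, CharSequence.class));
    }
}
```

The upcast test succeeds and the downcast test fails, which is the
asymmetry the optimizer exploits: only casts whose target is a
supertype of the combined, link-time-known type can be dropped.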

