[kaffe] Lock patch
guilhem at kaffe.org
Tue Mar 8 09:53:15 PST 2005
Helmer Krämer wrote:
> Guilhem Lavaux <guilhem at kaffe.org> wrote:
>>I've just finished a functional patch for the locking system. I don't
>>know yet how much slower it is. But I've already had some interesting
>>results with a private app that was not working previously. If no one
>>objects, I will commit it on Wednesday...
> If nobody owns the lock, the jthread_t of the current thread is stored in
> *lkp (afterwards called thin lock). If the same thread tries to lock the
> mutex again, or another thread tries to do so, a heavy lock will be allocated.
> If *lkp does not point to a heavy lock, a new one is allocated on the heap.
> To preserve the state of the lock when a thin lock is replaced by a heavy
> lock, new->holder is initialized with the current value of lk, and lockCount
> is set if necessary. Afterwards an attempt is made to install the heavy lock.
> If that doesn't work (either because the lock was freed or because another
> thread already installed a heavy lock), the heavy lock is freed and we start
> again by checking whether *lkp is a heavy lock. Once we have a heavy lock,
> we wait for it to become usable by the current thread, just like your code
> does.
> What do you think of this?
It is interesting, as it restores some speed to
locks_internal_lockMutex. However, this implementation is missing static
heavy locks. So maybe we should have a way to prepare those locks, or
we'll have to rely once again on my crappy implementation using the
special 'lock_in_progress' iLock (because you have to protect the
concurrent initialization of this heavy lock). We cannot use malloc in
getHeavyLock for static locks, as we may use them in KaffeGC_malloc or
KaffeGC_free (and imagine the horror! ;) ).
So I propose an 'initStaticLock' function which will basically call
KSEM(init) and put the heavyLock pointer in 'iStaticLock.lk'.