Pessimistic locking

From: Gili (cowwo..bs.darktech.org)
Date: Sun Aug 28 2005 - 23:46:01 EDT

    Hi,

            I'm not a big fan of pessimistic locking either, but I'm using the
    caching algorithm discussed here:
    http://www.usenix.org/events/usits99/full_papers/bahn/bahn_html/ and it
    requires pessimistic locking on its tables, such that at most one client
    may access the cache at a time.

            I'm not sure how pessimistic locking works in Hibernate either, but
    here is how I pictured it working in my mind:

            DataContext dataContext; // preinitialized
            CacheEntry entry; // preinitialized
            CacheManager manager; // preinitialized
            List listOfObjects = new ArrayList();
            listOfObjects.add(entry);
            listOfObjects.add(manager);

            dataContext.lock(listOfObjects, READ_WRITE_LOCK);
            try
            {
                    // operate on entry, manager objects
            }
            finally
            {
                    dataContext.unlock(listOfObjects);
            }
            // continue rest of session

            Four points here...

    1) We should be able to lock and unlock at any time -- i.e. we
    shouldn't be forced to release locks only at the end of a dataContext.
    2) If the user forgets to release a lock by the end of the dataContext,
    Cayenne will release it for him.
    3) Abstract away DB-specific locking paradigms (because not all DBs
    provide the same kind of locking) and map them to the most appropriate
    locking mechanism in each DB. My experience is weak in this area; I'm
    not sure what kinds of locks exist, but we can see what Hibernate does.
    They basically have the equivalent of: READ, READ_WRITE,
    READ_WRITE_NOWAIT and maybe more, but I don't understand the rest.
    4) The reason I think we should require the user to pass in a list of
    objects rather than a single one is so that we can hopefully do some
    sort of deadlock checking on his behalf. If there is insufficient
    information for us to check for deadlocks, then we should just let the
    user lock one object at a time.
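
            To make point 4 concrete, here is a minimal sketch of the
    classic deadlock-avoidance trick that passing a whole list enables:
    every client sorts the objects by some stable id and acquires locks in
    that order, so no two clients can ever wait on each other in a cycle.
    This uses plain java.util.concurrent locks, not a real Cayenne API --
    the class, method names and the Long ids are all illustrative
    assumptions.

```java
import java.util.*;
import java.util.concurrent.locks.*;

// Hypothetical sketch: acquiring locks in a globally consistent order
// prevents the classic two-client deadlock (client A locks X then waits
// on Y, while client B locks Y then waits on X).
public class OrderedLocking {

    // Lock every entry, in ascending order of its stable id, so that all
    // clients acquire the same locks in the same order.
    static void lockAll(Map<Long, Lock> locksById) {
        List<Long> ids = new ArrayList<>(locksById.keySet());
        Collections.sort(ids);
        for (Long id : ids) {
            locksById.get(id).lock();
        }
    }

    // Release order does not matter for deadlock avoidance.
    static void unlockAll(Map<Long, Lock> locksById) {
        for (Lock l : locksById.values()) {
            l.unlock();
        }
    }

    public static void main(String[] args) {
        Map<Long, Lock> locks = new HashMap<>();
        locks.put(2L, new ReentrantLock());
        locks.put(1L, new ReentrantLock());
        lockAll(locks); // acquires id 1 first, then id 2, on every client
        try {
            // operate on the locked objects
        } finally {
            unlockAll(locks);
        }
    }
}
```

            If the objects carry no usable ordering key, this degrades to
    the "lock one object at a time" fallback mentioned above.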

            I'm a bit confused about something you said though: why do we open a
    new connection per select or commit operation? I was under the
    impression that a dataContext is associated with a single database
    connection. What's the benefit of the current approach?
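
            For what it's worth, here is how I understand the connection
    issue at the JDBC level: a row lock taken with SELECT ... FOR UPDATE
    lives only as long as the transaction on the Connection that issued
    it, so a model that draws a fresh pooled Connection per operation
    cannot hold a pessimistic lock across operations. A hedged sketch
    (the CACHE_ENTRY table and column names are made up for illustration):

```java
import java.sql.*;

// Sketch of pessimistic row locking over plain JDBC. The lock acquired
// by the SELECT ... FOR UPDATE below is scoped to the transaction on
// this one Connection: commit or rollback releases it, and so does
// returning the Connection to the pool.
public class ForUpdateSketch {

    static final String FOR_UPDATE_SQL =
        "SELECT DATA FROM CACHE_ENTRY WHERE ID = ? FOR UPDATE";

    static void updateUnderRowLock(Connection con, int entryId, String newData)
            throws SQLException {
        con.setAutoCommit(false); // the lock lives inside this transaction
        try {
            try (PreparedStatement select = con.prepareStatement(FOR_UPDATE_SQL)) {
                select.setInt(1, entryId);
                select.executeQuery().close(); // row is now locked
            }
            try (PreparedStatement update = con.prepareStatement(
                    "UPDATE CACHE_ENTRY SET DATA = ? WHERE ID = ?")) {
                update.setString(1, newData);
                update.setInt(2, entryId);
                update.executeUpdate();
            }
            con.commit();   // releases the row lock
        } catch (SQLException e) {
            con.rollback(); // rollback also releases the lock
            throw e;
        }
    }
}
```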

            Anyway... please let me know what you think.

    Thank you,
    Gili

    Andrus Adamchik wrote:
    >
    > On Aug 27, 2005, at 3:41 PM, Gili wrote:
    >
    >> Could you possibly help me add pessimistic locking support to
    >> Cayenne? I've got a use-case that requires it.
    >>
    >> I'm hoping to add both pessimistic locking and streaming BLOBs to
    >> the next release but I am new to Cayenne so I don't know where
    >> exactly to start.
    >
    >
    > Hi Gili,
    >
    > You've already found ExtendedType API. This is a place to start doing
    > streaming BLOBs.
    >
    > I am not a big fan of pessimistic locking (possibility of deadlocks,
    > lowered overall throughput and inefficient resource use...) and as a
    > result don't have much experience with it. So I haven't given it much
    > thought and won't be able to provide good advice without doing some
    > research, including cross-db issues.
    >
    > However I am all for including this feature for completeness. It can
    > definitely be useful. So if you start on the implementation, I'd
    > suggest you subscribe to cayenne-devel and post your questions
    > there. Time permitting, I and, I am sure, others would be able to
    > help with integration.
    >
    > One piece of it is already in place - attributes and relationships can
    > be tagged as part of a lock. This is currently used for optimistic
    > locking and can be extended to pessimistic locking (though AFAIK not
    > all DBs allow column-level locking ... this will have to be abstracted
    > through DbAdapter SQLActions).
    >
    > Not sure about the flow, i.e. how the lock is created... Probably some
    > method at the DataContext level (DataContext.lock()) that would result
    > in all select queries being executed with a "FOR UPDATE" clause until
    > the next commit or rollback.
    >
    > Also if I understand correctly, the JDBC Connection that created a lock
    > must be the one to unlock it (is that so??). This would be a big change
    > to the current model of a "disconnected" DataContext, where generally
    > speaking each select or commit operation gets its own Connection from
    > the pool. We'll have to open a whole can of worms with an open
    > connection tied to an execution thread. Or maybe this is not as bad as
    > I think it is...
    >
    > Andrus
    >
    >
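
            P.S. For contrast with the optimistic locking Andrus mentions
    above, my understanding of the version-column pattern is sketched
    below -- the table and column names are illustrative, not whatever
    SQL Cayenne actually generates. The idea is that no lock is held at
    all; instead, the UPDATE only succeeds if nobody changed the row
    since we read it.

```java
// Hedged sketch of optimistic locking via a version column. A writer
// reads the row (including VERSION), then issues an UPDATE that both
// bumps VERSION and requires the old VERSION to still be in place.
public class OptimisticLockSketch {

    static final String UPDATE_SQL =
        "UPDATE CACHE_ENTRY SET DATA = ?, VERSION = VERSION + 1 "
        + "WHERE ID = ? AND VERSION = ?";

    // One affected row means our update won the race. Zero rows means a
    // concurrent writer bumped VERSION first, so the caller must re-read
    // the row and retry (or report a conflict to the user).
    static boolean updateSucceeded(int rowsAffected) {
        return rowsAffected == 1;
    }
}
```

            That retry-on-conflict behavior is exactly what my cache
    use-case can't tolerate, which is why I still need the pessimistic
    variant.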

    -- 
    http://www.desktopbeautifier.com/
    



    This archive was generated by hypermail 2.0.0 : Sun Aug 28 2005 - 23:46:01 EDT