Newbie user here, starting to use YugabyteDB with some transactional queries.
Regarding optimistic and pessimistic locks, could somebody please clarify what YugabyteDB supports?
Looking at https://docs.yugabyte.com/preview/architecture/transactions/explicit-locking/ at the top:

> YugabyteDB supports explicit locking. The transactions layer of YugabyteDB supports both optimistic and pessimistic locks.
Then, a bit below, the Pessimistic concurrency control section states:

> Note: YugabyteDB currently supports optimistic concurrency control, with pessimistic concurrency control being worked on actively.
which confuses me a bit. Some questions:

1. Do pessimistic locks and pessimistic concurrency control refer to the same thing?
2. If so, are pessimistic locks supported? The two quoted texts above seem to contradict each other. (If not, can somebody please explain?)
3. If they are not the same thing, can somebody please provide a bit more detail on their differences?
Let me explain my scenario a bit more. It's a rather basic select-then-update where multiple clients perform the same operation, and no two of them should end up updating the same row. For example, within one transaction:

```sql
BEGIN;
-- select one entry without an assigned owner
SELECT id FROM entries WHERE owner IS NULL LIMIT 1 FOR UPDATE SKIP LOCKED;
-- update that selected entry so the client id is set as owner
UPDATE entries SET owner = '$1' WHERE id = $2;
COMMIT;
```
I've been playing with various transaction isolation levels, with and without FOR UPDATE, but so far I have not been able to make clients block/wait while another client is doing the select-then-update.
I can have clients retry, and in fact this seems to work well. Using FOR UPDATE results in less retrying, but still some. In any case, the client needs to be aware that it may need to retry.
So, all in all, is there any case in which YugabyteDB can make clients wait, without clients needing to handle retries for these types of operations? (My understanding is that this would be pessimistic locking, but I may be wrong.)
I'm testing this using the latest yb image in Docker, with a single container but ~100 concurrent clients.
I agree that this is not clear, for two reasons: there is still ongoing work, and the terms "pessimistic" and "optimistic" can be used for different things. We will use more precise terms when this is fully implemented.
- We have optimistic semantics, where explicit locks and intents (write or read, depending on the isolation level) can raise a serializable error on one of the two transactions (chosen by random priority) when a conflict is detected. A better technical term would be "wound or die", or simply "one session fails on conflict", because "optimistic concurrency control" is sometimes used only for detection at commit time, whereas we can detect conflicts earlier.
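To illustrate this fail-on-conflict behavior, here is a hypothetical two-session sketch (table and values are made up; the exact error text varies by version):

```sql
-- session 1                        -- session 2
BEGIN;
SELECT * FROM entries
WHERE id = 1 FOR UPDATE;
                                    BEGIN;
                                    SELECT * FROM entries
                                    WHERE id = 1 FOR UPDATE;
                                    -- one of the two sessions is aborted with a
                                    -- serializable error (SQLSTATE 40001), e.g.
                                    -- "could not serialize access due to concurrent update",
                                    -- and must be retried by the application
COMMIT;
```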
- To implement the Read Committed isolation level, we have implemented a pessimistic behavior where the conflicting transaction waits. This looks like pessimistic locking but is actually implemented with exponential retries (transparent to the user, because in Read Committed we can retry the statement at a more recent read point). This is phase 2 of https://github.com/yugabyte/yugabyte-db/issues/5683 and the current state.
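Under Read Committed (when enabled), the same hypothetical sketch waits instead of failing, which is the behavior the question is after:

```sql
-- session 1                        -- session 2
BEGIN ISOLATION LEVEL READ COMMITTED;
SELECT * FROM entries
WHERE id = 1 FOR UPDATE;
                                    BEGIN ISOLATION LEVEL READ COMMITTED;
                                    SELECT * FROM entries
                                    WHERE id = 1 FOR UPDATE;
                                    -- ...waits (internally retried, transparent to
                                    -- the client) until session 1 commits or aborts...
COMMIT;
                                    -- now proceeds; no application-side retry needed
```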
- The final implementation will be based on a wait queue and deadlock detection, so that its performance is better than retries, and it will be available in all isolation levels. This will probably be the best choice for most cases.
Another note on terminology: we call "explicit locks" those that you take with LOCK or SELECT ... FOR SHARE/UPDATE, and "intents" the implicit ones taken on reads and writes, but they are actually all the same: lock information stored in provisional records during transaction operations.
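For example, an implicit write intent from a plain UPDATE conflicts with an explicit row lock in exactly the same way (hypothetical sketch):

```sql
-- session 1: explicit lock         -- session 2: implicit write intent
BEGIN;
SELECT * FROM entries
WHERE id = 1 FOR UPDATE;
                                    UPDATE entries SET owner = 'x' WHERE id = 1;
                                    -- this write intent conflicts with session 1's
                                    -- explicit lock just as another explicit
                                    -- SELECT ... FOR UPDATE would
COMMIT;
```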
Your scenario should work if you start the cluster nodes with the yb_enable_read_committed_isolation=true flag for the tservers. It is not the default (for backward compatibility, as it was introduced later): https://dev.to/yugabyte/how-to-set-read-committed-in-yugabytedb-2m6c
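As a config sketch, a single-node Docker setup with the flag enabled could look like this (image tag and yugabyted option syntax may differ slightly by version, so check the linked article):

```shell
# start a single-node cluster in Docker with Read Committed enabled
docker run -d --name yugabyte -p 5433:5433 yugabytedb/yugabyte:latest \
  bin/yugabyted start --daemon=false \
  --tserver_flags="yb_enable_read_committed_isolation=true"
```

With the flag set, transactions started as READ COMMITTED actually get Read Committed semantics instead of being silently mapped to the default isolation level.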
Without this setting, it behaves as Repeatable Read, where pessimistic locking is not yet available, so application-side retries are needed.
An example of using SKIP LOCKED (with a way to be fully scalable by reading from different points if needed): https://dev.to/yugabyte/scalable-job-queue-in-sql-yugabytedb-4ma5
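As a sketch of that pattern applied to your `entries` table, the select and the update can also be collapsed into one atomic statement, so the claim happens in a single round trip:

```sql
-- claim one unowned entry; SKIP LOCKED makes concurrent workers
-- pick different rows instead of conflicting on the same one
UPDATE entries
SET owner = $1
WHERE id = (
  SELECT id
  FROM entries
  WHERE owner IS NULL
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING id;
-- no row returned means the queue is empty
-- (or all candidate rows were locked by other workers)
```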