How to solve the concurrency bottleneck when implementing read-write locks with MongoDB
I created a lock table in MongoDB whose primary key (`_id`) is the resource id, since reads and writes on a resource must be mutually exclusive. The table also has an array field, `owner`, which records which tasks currently hold the lock. When a task needs a read lock on a resource, I start a MongoDB transaction, then insert or update the lock document, adding the task's identity id to `owner`.
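For context, here is a minimal sketch of the filter and update I'm using. This is illustrative pseudocode in plain Python dicts: the `_id` and `owner` fields match my schema, but the function names and the commented pymongo calls are just placeholders for the real code, which runs inside a transaction.

```python
# Sketch of my lock table's read-lock acquisition (illustrative only;
# the real code executes this inside a MongoDB transaction via pymongo).

def read_lock_filter(resource_id):
    # The resource id is the primary key (_id) of the lock table.
    return {"_id": resource_id}

def read_lock_update(task_id):
    # $addToSet appends the task id to the owner array without duplicates;
    # combined with upsert=True the document is created on first acquisition.
    return {"$addToSet": {"owner": task_id}}

# Roughly what the real code does:
#   with client.start_session() as session:
#       with session.start_transaction():
#           locks.update_one(read_lock_filter(rid), read_lock_update(tid),
#                            upsert=True, session=session)

if __name__ == "__main__":
    print(read_lock_filter("res-1"))
    print(read_lock_update("task-A"))
```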
Now I have a problem: if multiple tasks acquire a read lock on the same resource at the same time, multiple MongoDB transactions modify the same document concurrently, producing many write conflicts that make lock acquisition fail. How can I solve the write-conflict problem caused by this concurrency? For now I retry as a temporary workaround, but I'd like to know whether there is a better approach.
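My current workaround looks roughly like the following: retry the whole acquisition with exponential backoff plus jitter whenever a write conflict surfaces. The `WriteConflict` class and the `acquire` callable here are placeholders, not real driver identifiers; in practice I catch whatever transient-transaction error the driver raises.

```python
import random
import time

class WriteConflict(Exception):
    """Placeholder for the driver's transient write-conflict error."""

def with_retry(acquire, attempts=5, base_delay=0.01, sleep=time.sleep):
    # Retry with exponential backoff plus random jitter so that
    # concurrent tasks don't all retry in lockstep and conflict again.
    for attempt in range(attempts):
        try:
            return acquire()  # runs the lock transaction
        except WriteConflict:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            sleep(delay)
```

The `sleep` parameter is injectable only so the helper is easy to test; the backoff itself is the whole workaround, and it still feels like papering over the contention rather than removing it.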