I have an API where an insert into `tableA` happens after validating the result of a select query on some other tables, so `tableA` cannot have any unique key other than the auto-generated ID. Because of this, concurrent requests to the API create duplicate entries in `tableA` instead of a single row.
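
To make the race concrete, here is a minimal sketch of the check-then-insert pattern, assuming MySQL/InnoDB; the table and column names other than `tableA` (`tableX`, `ref_id`, `payload`) are hypothetical:

```sql
START TRANSACTION;

-- Validation step: two concurrent requests can both run this SELECT
-- before either of them has committed its insert...
SELECT COUNT(*) FROM tableX WHERE ref_id = 42;

-- ...so both pass validation and both insert, producing duplicates,
-- because tableA has no unique key that would reject the second row.
INSERT INTO tableA (ref_id, payload) VALUES (42, '...');

COMMIT;
```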
This API's latency is already very high, and the same transaction does a lot of bulk inserts into two other tables (`tableB` and `tableC`), so I cannot make the transaction serializable, since that would lock all three tables.
One solution I could think of is a separate `lock` table with a unique composite key, which I insert into after validating the select query. Since repeatable read takes a lock on this key, this avoids the duplication in the original API and also does not block tables A, B, and C for other requests.
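
Roughly what I have in mind, again assuming MySQL/InnoDB; the key columns of the `lock` table below are hypothetical:

```sql
-- Lock table; the composite unique key is what should serialize
-- concurrent requests for the same logical entity.
CREATE TABLE `lock` (
    ref_id BIGINT      NOT NULL,
    action VARCHAR(32) NOT NULL,
    UNIQUE KEY uq_ref_action (ref_id, action)
);

-- Inside the API transaction, right after the validating SELECT:
INSERT INTO `lock` (ref_id, action) VALUES (42, 'create');
-- A concurrent transaction inserting the same (ref_id, action) waits on
-- this key, while tableA, tableB and tableC stay unlocked for other
-- requests.
```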
But the problem is that, since the transaction is write-heavy, concurrent requests with the same unique key may wait on the `lock` table insert but still proceed with the rest of their inserts, and only fail while committing the transaction.
Is there any other efficient solution to this problem?