I have a POST API where you can save some data for a customer. There is a possibility that somebody programmatically initiates multiple calls with the same data within milliseconds/microseconds of each other.
I accept the first request and start processing it, which includes validations (db reads) and finally a save (db insert inside a transaction). If the insert takes a little time to finish, the other requests pass the duplicate-entry validation (db reads) in the meantime. The second request then waits at the insert because the first request holds the lock, and once the first one commits, both requests end up inserting the data.
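To make the race concrete, here is a stripped-down sketch of the flow (the table, column names, and single-table check are made up for illustration; the real validation reads several tables):

```python
import sqlite3

# Minimal stand-in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_data (customer_id TEXT, payload TEXT)")

def save_customer_data(customer_id: str, payload: str) -> None:
    # Validation step: read-based duplicate check (committed reads only).
    duplicate = conn.execute(
        "SELECT 1 FROM customer_data WHERE customer_id = ? AND payload = ?",
        (customer_id, payload),
    ).fetchone()
    if duplicate:
        raise ValueError("duplicate request")
    # Race window: a second request can pass the check above before this
    # transaction commits, so both requests end up inserting the same data.
    with conn:
        conn.execute(
            "INSERT INTO customer_data (customer_id, payload) VALUES (?, ?)",
            (customer_id, payload),
        )

save_customer_data("c-123", '{"amount": 42}')
```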
I can't simply add a unique constraint on the table for this data (consider a denormalized db design), and the db reads are not happening with read uncommitted and involve multiple tables.
I'm thinking of creating a unique key for a customer when a request comes in and deleting the key after the request finishes processing; until then the customer cannot access that resource. But it may have some drawbacks.
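Roughly what I have in mind, sketched with Redis as the shared lock store (the key names and TTL are just placeholders; the same idea could be done with a unique-keyed locks table):

```python
import redis

r = redis.Redis()

def acquire_customer_lock(customer_id: str, ttl_seconds: int = 30) -> bool:
    # SET with NX + EX is an atomic "create only if absent" with an expiry,
    # so a crashed request cannot leave the customer locked forever.
    return bool(r.set(f"lock:customer:{customer_id}", "1", nx=True, ex=ttl_seconds))

def release_customer_lock(customer_id: str) -> None:
    r.delete(f"lock:customer:{customer_id}")

def handle_request(customer_id: str, payload: str) -> None:
    if not acquire_customer_lock(customer_id):
        # Another request for this customer is already in flight.
        raise RuntimeError("request already being processed")
    try:
        pass  # existing validations (db reads) + insert go here
    finally:
        release_customer_lock(customer_id)
```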
I'm looking for solutions for how these situations can be handled (imagine a case with 100s of APIs; I cannot simply move all validations into each db insert's transaction and roll back).