I’m working on a project that has a microservice-based architecture.
Recently we started facing problems where a single user request requires updating multiple microservices, each with its own dedicated database, and the response can only be sent to the user after all the microservices have updated their databases.
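To make the flow concrete, here is a rough sketch of what such a request handler conceptually looks like in our setup (the service names, URLs, and payloads below are placeholders for illustration, not our real API):

```python
# Sketch only: two hypothetical downstream services, each owning its own database.
import requests

def handle_user_request(payload: dict) -> dict:
    # Each call makes a downstream service commit a write to its own database.
    resp_a = requests.post("http://service-a/api/update", json=payload, timeout=5)
    resp_a.raise_for_status()

    resp_b = requests.post("http://service-b/api/update", json=payload, timeout=5)
    resp_b.raise_for_status()

    # Only after every service has acknowledged its write do we respond to the user.
    # If the second call fails, the first write has already committed and there is
    # no shared transaction to roll it back.
    return {"status": "ok"}
```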
I understand that solutions like two-phase commit exist, but 2PC feels like an anti-pattern here: it has performance issues and it would further complicate the system.
My question is whether strong data consistency is even a valid ask in a microservice architecture, or whether needing to update multiple databases for a single user request is a sign of badly drawn service boundaries.
I understand that this post doesn’t include exact low-level details, but I wanted to get the community’s opinion on this.
Edit:
Let’s consider a social media platform where a user can add a post. By default the post is shared only with the people who follow the author, but the author can optionally add a few more people, and the post will be shared with them as well.
As per our current design, post data is stored in one microservice and data about who can see which posts is stored in another. Both data stores are relational.
Now, we want the post author to be able to add those other people while creating the post itself (instead of first creating the post and then adding the people who can view it), and this should be an atomic operation. The problem is that it requires updating two different microservices’ data stores, with no easy way to guarantee ACID constraints across them.
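Roughly, the create-post flow would have to look something like the sketch below (endpoints and payloads are made up for illustration; the point is where the failure window sits):

```python
# Sketch of the non-atomic create-post flow across two hypothetical services.
import requests

def create_post_with_viewers(author_id: str, content: str, extra_viewer_ids: list[str]) -> str:
    # Write 1: the post service persists the post in its own relational database.
    post_resp = requests.post(
        "http://post-service/posts",
        json={"author_id": author_id, "content": content},
        timeout=5,
    )
    post_resp.raise_for_status()
    post_id = post_resp.json()["post_id"]

    # Write 2: the visibility service records who may see the post, in a different
    # database. If this call fails or the process crashes right here, the post
    # already exists but its intended audience was never recorded, and there is
    # no cross-service transaction that can roll back write 1.
    vis_resp = requests.post(
        "http://visibility-service/post-visibility",
        json={"post_id": post_id, "viewer_ids": extra_viewer_ids},
        timeout=5,
    )
    vis_resp.raise_for_status()

    return post_id
```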