How can a microservice environment be protected from being polluted by duplicated data?
In a system where data is shared by publishing it to a message broker (e.g., Pubsub, Kafka or RabbitMQ), and consumers save their own copies, a challenge can arise if a producer loses part of its data (e.g., due to a bug or database issue) and then recreates that data. If the data lacks a unique key derived from stable business values, duplicate records may propagate across the system when the data is recreated.
For example, if keys are generated randomly (e.g., using UUID.randomUUID()), re-creation of the same logical entity could produce different keys, leading to duplicate data downstream.
Is this just a downside of splitting a monolithic database, where consistency is easier to maintain centrally, or is this something you would try to avoid?
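For illustration, a minimal Java sketch of the problem (the entity and field names are made up):

    import java.util.UUID;

    public class RandomKeyProblem {
        public static void main(String[] args) {
            // The same logical entity ("order ORD-1001 for tenant acme") is created,
            // lost, and then recreated. A random key is generated each time:
            UUID firstCreation = UUID.randomUUID();
            UUID recreation    = UUID.randomUUID();

            // The keys never match, so consumers that already stored a copy under
            // firstCreation now receive what looks like a brand new record.
            System.out.println(firstCreation.equals(recreation)); // false
        }
    }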
If you know of best practices or ways of fixing this then please share.
How can a microservice environment be protected from being polluted by duplicated data?
There needs to be a definition of what “unique” is.
Ideally this would be something that was universal and consistent across the entire solution, like a stable and persistent identifier.
If you do not have one, and are in a position to establish such an ID, then you should consider doing so as it might make things easier in the long run.
Logically, if the data being produced cannot be uniquely identified, then I don’t see how anyone else downstream can work that out… as per the old adage: “crap in, crap out”.
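One way to get such an identifier, if the business values really are stable, is to derive the key from them instead of generating it randomly. A minimal sketch in Java (the "order"/"tenant" naming is an assumption for illustration, not a prescription):

    import java.nio.charset.StandardCharsets;
    import java.util.UUID;

    public class StableId {
        // Derive the key from stable business values so that recreating the same
        // logical entity always yields the same ID.
        static UUID orderId(String tenant, String orderNumber) {
            String businessKey = "order:" + tenant + ":" + orderNumber;
            // Name-based (type 3) UUID: identical input always produces the same UUID.
            return UUID.nameUUIDFromBytes(businessKey.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) {
            System.out.println(orderId("acme", "ORD-1001"));
            System.out.println(orderId("acme", "ORD-1001")); // same value on every run
        }
    }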
Composite Key
Another way of determining uniqueness is to take a combination of properties and treat them as a kind of composite key. This approach is sometimes used in IDAM solutions where user identities come in from different sources and you need to match a combination of values (e.g. first name, last name, date of birth, etc.) to determine if you already have that person in the system but from a different context. For context, you can get this in a university setting where someone can enroll as a Student and later join as Staff – or vice versa – and where students and staff are managed through separate systems of record.
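A rough sketch of that kind of matching, assuming Java and purely illustrative field names:

    import java.time.LocalDate;
    import java.util.Locale;
    import java.util.Objects;

    // The "composite key" is a normalised combination of attributes; equals/hashCode
    // over the record components is what decides whether two sources refer to the
    // same person.
    record PersonKey(String firstName, String lastName, LocalDate dateOfBirth) {
        static PersonKey of(String first, String last, LocalDate dob) {
            return new PersonKey(
                first.trim().toLowerCase(Locale.ROOT),
                last.trim().toLowerCase(Locale.ROOT),
                dob);
        }
    }

    public class IdentityMatcher {
        public static void main(String[] args) {
            PersonKey fromStudentSystem = PersonKey.of("Ada", "Lovelace", LocalDate.of(1815, 12, 10));
            PersonKey fromStaffSystem   = PersonKey.of(" ada", "LOVELACE", LocalDate.of(1815, 12, 10));
            // Despite formatting differences, both sources resolve to the same logical person.
            System.out.println(Objects.equals(fromStudentSystem, fromStaffSystem)); // true
        }
    }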
Stability
a challenge can arise if a producer loses part of its data (e.g., due to a bug or database issue) and then recreates that data.
In large scale / multi-application systems there's generally a reliance on the data in Systems of Record being stable (a System of Record, or SOR, is a system that masters specific types of data / records and/or is the authoritative source for them).
Downstream applications (and the teams who design/deliver/support them) generally assume a minimum level of competency and stability in such systems.
E.g. if I'm in a government department and architecting an application that will consume some kind of record from a SOR, I'm unlikely to worry about the authoritative IDs changing, because it's massively in the interests of the wider organisation for them to be stable, and extensive precautions (back-ups, etc.) are taken. Additionally, not only are the chances of the IDs changing astronomically low, but the effort involved in architecting a solution that would compensate for that would be a major undertaking.
Is this just a downside of splitting a monolithic database, where consistency is easier to maintain centrally, or is this something you would try to avoid?
If you can maintain record uniqueness in a system implemented as a monolith, then I'm not clear why you couldn't achieve the same thing using a different architectural style – although it may depend on the specifics of the situation.
There are all kinds of situations where you might have data coming in from a source you don't control and where you need/want uniqueness – it all comes back to how uniqueness is defined. If the logical process makes this impossible then there's not a lot you can do.
The best advice I can give you is to whiteboard it out before you start coding. There may be some design / architectural patterns that can help, but I don’t know enough about your specific situation to point you at any at the moment.
Your domain event would notify listeners about a change in the state of an entity managed by the microservice – say, a customer entity. Entities are identified by their ID, not by their attributes. So, if the domain event is "email changed", it will consist of the customer_id and the updated email. The listeners model the same entity, so they update their existing representation of that entity rather than create a new one, and event processing won't lead to any duplicated data.
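A minimal, broker-agnostic sketch of that idea (all names made up): the listener keys its copy by customer_id and upserts, so reprocessing or re-publishing the event updates the existing record instead of creating a new one.

    import java.util.HashMap;
    import java.util.Map;

    public class EmailChangedListener {
        // The event carries the entity's ID plus the changed attribute.
        record EmailChanged(String customerId, String newEmail) {}

        // Stand-in for the consumer's own datastore, keyed by the entity ID.
        private final Map<String, String> emailByCustomerId = new HashMap<>();

        void onEvent(EmailChanged event) {
            // Upsert by ID: handling the same event twice is harmless.
            emailByCustomerId.put(event.customerId(), event.newEmail());
        }

        public static void main(String[] args) {
            EmailChangedListener listener = new EmailChangedListener();
            listener.onEvent(new EmailChanged("cust-42", "ada@example.com"));
            listener.onEvent(new EmailChanged("cust-42", "ada@example.com")); // duplicate delivery
            System.out.println(listener.emailByCustomerId.size()); // 1 – no duplicated data
        }
    }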