I’d like some input on how to keep databases that are running on different servers throughout our organization synchronized.
We have a registration system of sorts, and when user “John Doe” registers on server 1, this registration information needs to be instantly available on servers 2 and 3. Currently, our network administrators have set up PostgreSQL replication to do this, and it seems to be working fine.
When this solution is up and running at full scale, we will have 2 databases being replicated to potentially 4 or 5 different servers. Two databases, because the registration information is saved into two different databases: one is a CRM, the other is a reservation application. (BTW, this is all on a LAN, not a WAN.)
What I’m trying to figure out in my head is whether PostgreSQL replication is the best way to do this or if we should be looking at a software solution. I saw one post here where someone was asking about ETL vs. a REST API.
REST may be a good option, I guess. I’m not sure it completely suits our needs… I’ll have to think about it some more.
But I’m also thinking about enterprise integration software, like iWay or something like that, where you identify the source database, identify the target database, and then define the interval. The tool would then monitor the source database for changes every X minutes, and anytime there is a change, it would update the target databases.
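To make that concrete, here is roughly what I picture such a tool doing under the hood. This is just a sketch to illustrate the polling idea; the table, columns, connection strings, and interval are all made up, not our actual schema:

```python
# Hypothetical polling sync: watermark on an updated_at column, copy changed rows
# from the source registration database into each target database on an interval.
# A real tool would also need to handle deletes, conflicts, and error recovery.
import time
from datetime import datetime
import psycopg2

SOURCE_DSN = "dbname=registration host=server1"          # hypothetical
TARGET_DSNS = ["dbname=crm host=server2",                # hypothetical
               "dbname=reservations host=server3"]
POLL_INTERVAL = 60  # seconds ("every X minutes")

def sync_once(last_seen):
    # Pull everything that changed on the source since the last pass.
    src = psycopg2.connect(SOURCE_DSN)
    with src, src.cursor() as cur:
        cur.execute(
            "SELECT id, name, email, updated_at FROM registrations "
            "WHERE updated_at > %s ORDER BY updated_at",
            (last_seen,))
        rows = cur.fetchall()
    src.close()

    # Upsert the changed rows into every target.
    for dsn in TARGET_DSNS:
        tgt = psycopg2.connect(dsn)
        with tgt, tgt.cursor() as cur:
            for id_, name, email, updated_at in rows:
                cur.execute(
                    "INSERT INTO registrations (id, name, email, updated_at) "
                    "VALUES (%s, %s, %s, %s) "
                    "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, "
                    "email = EXCLUDED.email, updated_at = EXCLUDED.updated_at",
                    (id_, name, email, updated_at))
        tgt.close()

    return rows[-1][3] if rows else last_seen

if __name__ == "__main__":
    watermark = datetime(1970, 1, 1)
    while True:
        watermark = sync_once(watermark)
        time.sleep(POLL_INTERVAL)
```

The obvious catch is that “fresh within seconds” and “poll every X minutes” pull in opposite directions, which is part of why I’m unsure this is the right shape for us.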
But I’m not sure I’m asking myself all the right questions to do a proper evaluation.
Here are some I thought of so far:
- How fresh does the data need to be on the servers? Has to be ‘fairly’ real-time… I would say within seconds.
- What’s the volume of information? Several records could be created in the source database every few seconds.
What else should I be considering?
Thanks.
Ask yourself and your business very carefully, sincerely, and repeatedly, if it really needs to be real-time. As in, really instant? Are we talking atomic, where you can never, ever have those two servers out of sync, not even for a microsecond? Or is there really some kind of SLA attached and the data can be stale for maybe a few milliseconds, or a few seconds, or even a few minutes?
The only way you can have atomic replication is to use a broker and massive distributed transactions. This is the BizTalk style architecture, and it is painfully, unbearably slow, brittle as year-old soup crackers, and unbelievably difficult to test.
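To give a feel for why, here’s a rough sketch of what those distributed writes look like at the lowest level, using psycopg2’s two-phase-commit support against two invented databases (the DSNs and table are made up, and PostgreSQL won’t even allow prepared transactions unless max_prepared_transactions is raised from its default of zero):

```python
# Sketch of an "atomic" cross-server write using two-phase commit (the thing a
# broker like BizTalk coordinates for you). Every participant must prepare, then
# every participant must commit; any failure in between leaves prepared
# transactions to clean up by hand, which is where the brittleness lives.
import psycopg2

DSNS = ["dbname=crm host=server2", "dbname=reservations host=server3"]  # hypothetical

def register_everywhere(name, email):
    conns = [psycopg2.connect(dsn) for dsn in DSNS]
    xids = [c.xid(1, f"reg-{email}", f"branch-{i}") for i, c in enumerate(conns)]
    try:
        for conn, xid in zip(conns, xids):
            conn.tpc_begin(xid)
            with conn.cursor() as cur:
                cur.execute("INSERT INTO registrations (name, email) VALUES (%s, %s)",
                            (name, email))
        for conn in conns:   # phase 1: everyone promises they can commit
            conn.tpc_prepare()
        for conn in conns:   # phase 2: only now does anyone actually commit
            conn.tpc_commit()
    except Exception:
        for conn in conns:
            try:
                conn.tpc_rollback()
            except Exception:
                pass  # orphaned prepared transactions may still need manual cleanup
        raise
    finally:
        for conn in conns:
            conn.close()
```

Multiply that by every table, every server, and every failure mode, and you can see why I call it painful.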
Commercial replication software like Oracle GoldenGate is nearly real-time. It’s also very expensive. But what you get for that high price is active-active replication – something that very few built-in replication tools support – among many other high-end features that are very important if you’re building a high-availability environment (let’s say at least four nines, and not just during business hours).
Enterprise Service Buses, at least the ones that aren’t selling snake oil, are based on messaging and are the complete opposite of real-time. Frameworks like NServiceBus, Apache Camel, MassTransit, and so on focus primarily on asynchronous messaging, which means eventual consistency. What this means, paradoxical as it may seem to some, is that it’s very fast and generally easier to maintain, but you risk physical or sometimes logical inconsistencies between services. There’s no point in adopting one of these if you don’t have, or don’t plan on building, a Service-Oriented Architecture in which services all function independently and without the need for any shared data.
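For contrast, here’s a bare-bones sketch of the messaging style those frameworks are built around, using RabbitMQ via pika purely as an example (the exchange, queue, and payload names are invented; NServiceBus, MassTransit, etc. give you the same shape with more ceremony):

```python
# The registration service publishes a "UserRegistered" event; each downstream
# system (CRM, reservations, ...) consumes it and updates its own database when
# it gets around to it. If a consumer is down for a minute, its copy of the data
# is a minute stale -- that's eventual consistency.
import json
import pika

BROKER = "broker.example.local"  # hypothetical

# --- Publisher: runs wherever the registration is first recorded ---
conn = pika.BlockingConnection(pika.ConnectionParameters(host=BROKER))
ch = conn.channel()
ch.exchange_declare(exchange="registrations", exchange_type="fanout", durable=True)
ch.basic_publish(
    exchange="registrations",
    routing_key="",
    body=json.dumps({"event": "UserRegistered", "name": "John Doe",
                     "email": "jdoe@example.com"}),
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)
conn.close()

# --- Consumer: one of these per downstream system, e.g. the CRM ---
def on_registration(channel, method, properties, body):
    event = json.loads(body)
    # Upsert into the CRM's own database here, then acknowledge.
    channel.basic_ack(delivery_tag=method.delivery_tag)

conn = pika.BlockingConnection(pika.ConnectionParameters(host=BROKER))
ch = conn.channel()
ch.exchange_declare(exchange="registrations", exchange_type="fanout", durable=True)
ch.queue_declare(queue="crm-registrations", durable=True)
ch.queue_bind(queue="crm-registrations", exchange="registrations")
ch.basic_consume(queue="crm-registrations", on_message_callback=on_registration)
ch.start_consuming()
```

Notice there is no coordination between the CRM consumer and the reservations consumer; each owns its own data, which is exactly the SOA precondition above.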
I’m not really sure which of these you want. Many programmers and most non-technical managers say they want real-time integration, but when they really stop to analyze the problem and the business, it turns out that the SLA is far more forgiving than that. For one thing, a user’s response time is going to be measured in seconds, so really, who cares if things are inconsistent for a half-second? That only really matters in things like medical equipment or automated trading systems. Moreover, optimistic concurrency has been around forever and it’s usually “good enough” for most applications – in other words, you can allow things to become inconsistent 1% of the time because you know that 99% of the time they’ll be fine, and for that troublesome 1% you can push the decision back to the user to resolve it.
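If “good enough” sounds acceptable, optimistic concurrency is about as cheap as it gets. A minimal sketch, assuming a registrations table with a version column (both invented for illustration):

```python
# Optimistic concurrency: no locks, just a version check at write time.
# If the row changed since this user read it, the UPDATE matches nothing and
# we push the conflict back to the user instead of blocking anyone.
import psycopg2

def update_registration(conn, reg_id, new_email, version_seen_by_user):
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE registrations "
            "SET email = %s, version = version + 1 "
            "WHERE id = %s AND version = %s",
            (new_email, reg_id, version_seen_by_user))
        if cur.rowcount == 0:
            raise RuntimeError("Registration was changed by someone else; please reload and retry.")
    conn.commit()
```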
So, think about what you really need and choose accordingly. ESBs and SOAs are great if you have a solid understanding of the business and stakeholders willing to cooperate. They also require a lot of effort in the analysis and design areas, so if you want a pure infrastructure solution, better to go with a commercial replication tool (or, I suppose, an open-source one if you find one that does everything you need). Only if you’re in a highly specialized sector that requires real-time distributed consistency should you even consider a broker-based solution; it’s almost never the right choice.
Whether you choose REST, SOAP, AMQP, Protocol Buffers or some other type of messaging is almost entirely irrelevant to your choice of architecture. Those are just TLAs (well, FLAs) used to describe formats. You can use almost any format with almost any architecture. Worry about the architecture first, then choose a format based on how well-supported it is within that architecture and how familiar your team is with it.
Edit: Since you asked for a list of questions to ask, here are mine:
- How fast does it need to be?
- Does it really need to be real time?
- Does it really really need to be real time?
- Are you sure it needs to be real time?
- What would happen if it isn’t real time?
- What’s the cost or risk associated with whatever those bad things are?
- What’s the minimum interval you would consider to be real time?
- What research is that interval based on?
- How was that research validated?
- Are you really willing to pay 10x as much in development and maintenance in order to have real time consistency?