I’m designing a custom web-based inventory management and workflow system for a client of mine. One of their remote sites has fairly sketchy internet access, and needs to have no (or very little) interruption of business when their connection to the universe goes down.
I’m picturing some sort of local installation of the web app with MySQL replication keeping their local DB up to date. When their connection out of the building fails, they can manually kick over to the local URL, which is configured to hit their local MySQL, and they stay in business.
The question is how to replicate those changes back up to the real master. I know there’s such a thing as master-master replication, but will that really hot-sync divergent tables back together when the connection comes back up?
It helps that the remote site’s use case is fairly distinct, and it’s likely (though not guaranteed) to write only to tables related to their wing of the business. I could perhaps limit the “failover mode” local application to just the pieces of the app that are unique to their location, which would be enough to keep them in business.
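If I went that route, I imagine the split could also be mirrored at the replication layer with replica-side filters, so only that site’s tables flow back. A rough sketch of what that might look like (the database name and table names here are hypothetical, just to show the shape of the options):

```
# On whichever server replicates *from* the remote site,
# only accept changes to that site's tables:
replicate-do-table      = inventory.remote_shipments
replicate-wild-do-table = inventory.remote\_%
```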
How would you approach this?
I would probably set up multi-master replication and always have the remote site run against its local server. That way there’s no failover to deal with, the users never get confused, and you’ve improved their latency. Be sure each server retains its binary logs for much longer than the connection could ever stay down, and put systems and processes in place to monitor replication status and lag.
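As a minimal sketch, assuming a plain two-node MySQL master-master setup (server IDs, offsets, and the retention value are placeholders you’d tune), the per-server settings might look something like:

```
# my.cnf on the central server (node 1 of 2)
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2    # step by the number of masters...
auto_increment_offset    = 1    # ...so AUTO_INCREMENT values never collide
expire_logs_days         = 30   # keep binlogs far longer than any plausible outage
                                # (binlog_expire_logs_seconds on MySQL 8.0+)

# my.cnf on the remote site's server (node 2 of 2)
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2
expire_logs_days         = 30
```

Each node is then pointed at the other with CHANGE MASTER TO … / START SLAVE, and your monitoring can watch Seconds_Behind_Master from SHOW SLAVE STATUS to catch a link that’s been down or lagging.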
Another option is to have the remote site run locally on an independent database and write your own process to sync the data. Since your system is custom this is more work, but it’s also more streamlined and can handle situations replication can’t, such as logical conflicts in the data. This approach becomes necessary if the data being synced needs logical checks. You’d probably be best off using GUIDs for primary keys, or a scheme of your own, to avoid key conflicts.
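For the roll-your-own route, a rough sketch of the schema shape (table and column names are made up for illustration; DEFAULT (UUID()) needs MySQL 8.0.13+, otherwise generate the UUID in application code):

```sql
-- Rows keyed by UUID so the remote site and the central site
-- can both insert freely without primary key collisions.
CREATE TABLE work_orders (
    id          CHAR(36)    NOT NULL DEFAULT (UUID()),
    site_code   VARCHAR(16) NOT NULL,
    status      VARCHAR(32) NOT NULL,
    updated_at  TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
                            ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (id)
);

-- "Outbox" of local changes that your sync process replays upstream
-- once the connection returns; logical conflicts get resolved there.
CREATE TABLE sync_outbox (
    id          CHAR(36)    NOT NULL DEFAULT (UUID()),
    table_name  VARCHAR(64) NOT NULL,
    row_id      CHAR(36)    NOT NULL,
    op          ENUM('insert','update','delete') NOT NULL,
    payload     JSON        NULL,
    created_at  TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP,
    synced_at   TIMESTAMP   NULL,
    PRIMARY KEY (id)
);
```

The outbox pattern is what buys you the “dynamically handle any situation” part: your sync job decides, per row, whether to apply, merge, or flag a change for a human, which straight MySQL replication will never do for you.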