Are there any design patterns (or best practices) for implementing a geographically distributed system (mostly a database)?
Description: There is a network of warehouses and a central office. I want every warehouse to replicate its data to the central office, and the central office to replicate back to each warehouse only the portion of data related to that warehouse (when it is modified). I would call this "filtered replication". Our database here is SQL Server 2008 R2. Should I switch to another database? What about NoSQL databases?
This is a .NET-based solution.
So far I have learned about Web Synchronization for Merge Replication and I am investigating it, but I have not yet figured out how to implement filtered replication. I am not sure how well NoSQL fits an e-commerce problem (I think I would need a combination of NoSQL + RDBMS if I went that way), but I am looking into RavenDB and MongoDB.
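From what I have read so far, merge replication's parameterized row filters seem to be the mechanism for this kind of "filtered replication". Below is a rough sketch of what I am experimenting with; the database name (Inventory), publication name (WarehousePub), and the Orders table with its WarehouseId column are just placeholders for my real schema:

    -- Placeholder names: database "Inventory", publication "WarehousePub",
    -- table dbo.Orders with a WarehouseId column.
    USE Inventory;
    GO

    -- Enable the database for merge publishing.
    EXEC sp_replicationdboption
        @dbname  = N'Inventory',
        @optname = N'merge publish',
        @value   = N'true';
    GO

    -- Merge publication with web synchronization enabled.
    EXEC sp_addmergepublication
        @publication = N'WarehousePub',
        @publication_compatibility_level = N'100RTM',
        @allow_web_synchronization = N'true';
    GO

    -- Article with a parameterized row filter: each subscriber only
    -- gets (and uploads) the rows whose WarehouseId matches the value
    -- its merge agent reports through HOST_NAME().
    EXEC sp_addmergearticle
        @publication = N'WarehousePub',
        @article = N'Orders',
        @source_owner = N'dbo',
        @source_object = N'Orders',
        @subset_filterclause = N'[WarehouseId] = CONVERT(int, HOST_NAME())';
    GO

If I understand correctly, each warehouse's merge agent would then supply its own warehouse id (for example through the agent's -Hostname property or the @hostname parameter when creating the subscription), so HOST_NAME() resolves to a different value per subscriber. Please correct me if this is not the right approach.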
Any insight would help a lot. Thanks!
Any research on "The CAP Theorem" will give really good insight. In general, it outlines the trade-offs involved in distributed data systems, and in researching those, you will find numerous solutions that meet your particular requirements.