A project I’m involved in has suffered a change in scope, and before I set about trying to cook up some homegrown solution, I’m wondering if there is something out there — some framework, for example — that will spare me from having to design and debug my own code. Let me try to explain the details as simply as possible.
Original project
This is a data migration project, an ETL. Originally, there were multiple source databases, multiple ETL engines (allowing for failover), and a single data warehouse database. The data warehouse was going to keep the data from the individual sources straight, and it was going to be replicated behind the scenes, meaning my ETL would only have to worry about writing to the 1 data warehouse. I had a plan for that.
The project’s change
Now, the customer is worried about mixing the individual source data into a single data warehouse. They want separate data warehouses. This would be simple enough, but for the fact that they still want the ETL engines working per the original agreement. Let me explain that.
The project’s requirements (in a nutshell)
Let’s imagine the following:
- 4 source databases
- 4 ETL engines
- 4 data warehouses (which may each be on a separate server)
Given the above, the ETL engines should be able to work round-robin, any single ETL engine pulling from any of the 4 source databases and writing to the appropriate data warehouse. If 1 or more source databases go down, or 1 or more ETL engines go down, or 1 or more data warehouses go down, the ETL process should still continue, merrily along, performing ETL wherever it can be done.
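To make the requirement concrete, here is a minimal sketch of the round-robin/failover behavior I have in mind, using lease-based claims over an in-memory dictionary as a stand-in for whatever shared coordination store would really be used (every name in it is invented):

```python
import time

LEASE_SECONDS = 60

# Each unit of work is a (source, warehouse) pair; source N feeds warehouse N.
PAIRS = [("source%d" % i, "warehouse%d" % i) for i in range(1, 5)]

# Stand-in for a shared coordination store. In production this would live
# somewhere every engine can see (a database table, say), not in memory.
claims = {}  # pair -> (engine_id, lease_expiry)

def claim_next_pair(engine_id, now=None):
    """Walk the pairs round-robin and claim the first one that is either
    unclaimed or whose owner's lease has lapsed (the lapse is the failover)."""
    now = time.time() if now is None else now
    for pair in PAIRS:
        owner = claims.get(pair)
        if owner is None or owner[1] < now:
            claims[pair] = (engine_id, now + LEASE_SECONDS)
            return pair
    return None  # every pair is already owned by a live engine

def renew_lease(engine_id, pair, now=None):
    """An engine renews its lease while it works; if the engine crashes,
    the lease expires and another engine can claim the pair."""
    now = time.time() if now is None else now
    if claims.get(pair, (None, 0))[0] == engine_id:
        claims[pair] = (engine_id, now + LEASE_SECONDS)

# Two engines claiming work: each ends up with a different pair.
print(claim_next_pair("etl-1"))   # ('source1', 'warehouse1')
print(claim_next_pair("etl-2"))   # ('source2', 'warehouse2')
```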
My problem
If there were 1 data warehouse, I could coordinate this; if the ETL engines each had only a single source-and-warehouse pair they were assigned to, I could handle this; but now things have gotten complicated. I'm really not up on the fancier frameworks (or even, perhaps, the concepts) that handle something like this. Perhaps there is a name for a scenario like this (and it's a well-known problem), but I don't even know the name.
Technologies used
Note: We already have a working prototype, delivered to and tested by the customer, that performs the ETL on 1 source and 1 destination (a sketch of that plumbing follows the list below). Here is what we are using:
- Jython (Python, running on the JVM) for the ETL
- Microsoft SQL Server for the source databases
- MySQL for the data warehouse databases
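For context, the 1-source-to-1-destination plumbing in the prototype is roughly the following shape. This is a sketch only: the connection URLs, credentials, and table and column names are invented, and it assumes Jython's bundled zxJDBC module plus the Microsoft SQL Server and MySQL JDBC drivers on the classpath.

```python
# Jython only: zxJDBC exposes JDBC drivers through the Python DB-API.
from com.ziclix.python.sql import zxJDBC

# Hypothetical connection details; the real URLs and credentials differ.
src = zxJDBC.connect(
    "jdbc:sqlserver://source1:1433;databaseName=ops",
    "etl_user", "secret",
    "com.microsoft.sqlserver.jdbc.SQLServerDriver")
dst = zxJDBC.connect(
    "jdbc:mysql://warehouse1:3306/dw",
    "etl_user", "secret",
    "com.mysql.jdbc.Driver")

src_cur = src.cursor()
dst_cur = dst.cursor()

# Extract from SQL Server, load into MySQL (table/columns are made up).
src_cur.execute("SELECT id, name, updated_at FROM customers")
dst_cur.executemany(
    "REPLACE INTO dim_customer (id, name, updated_at) VALUES (?, ?, ?)",
    src_cur.fetchall())
dst.commit()

src.close()
dst.close()
```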
My question (again)
Is there some kind of framework that coordinates a process like this, where the ETL engines can service all the source-destination pairs, provide failover, and yet won't be stepping on one another's toes, or is this something I have to code up myself?
In closing, I hope the above is clear. If I can do anything to clarify the above, please ask. Thanks.
It seems as though you want to keep your existing ETL logic pretty much the same, but need some new process to route the data in a more dynamic way.
Some type of software agent could act as a bridge between the transformation layer and the database load layer, providing the extra routing and failover functionality you need.
I am sure something like this exists; I would contact a few of the big ETL and database vendors and see what they have to offer. Personally, though, I would code my own bridge (call it a migration agent) and keep the code changes in your existing ETL modules to an absolute minimum, if possible.
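To sketch what I mean by a migration agent (everything here is invented for illustration; sqlite3 in-memory databases stand in for the MySQL warehouses purely so the example runs anywhere):

```python
import sqlite3

def make_bridge(warehouse_connect):
    """warehouse_connect maps a source name to a zero-argument function
    returning a DB-API connection to that source's own warehouse (the
    function should raise if the warehouse is down)."""
    pending = {}  # source -> rows we could not deliver yet

    def load(source, rows):
        """Route transformed rows from `source` to its matching warehouse.
        If that warehouse is unreachable, park the rows and report failure,
        so the other source/warehouse pairs keep flowing."""
        rows = pending.pop(source, []) + list(rows)
        try:
            conn = warehouse_connect[source]()
        except Exception:
            pending[source] = rows  # warehouse down; retried on next call
            return False
        cur = conn.cursor()
        cur.executemany(
            "INSERT INTO fact_rows (id, payload) VALUES (?, ?)", rows)
        conn.commit()
        return True

    return load

# Stand-in warehouses: one in-memory sqlite database per source.
_warehouses = {}

def stand_in(source):
    def connect():
        if source not in _warehouses:
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE fact_rows (id INTEGER, payload TEXT)")
            _warehouses[source] = conn
        return _warehouses[source]
    return connect

load = make_bridge({"source1": stand_in("source1"),
                    "source2": stand_in("source2")})
print(load("source1", [(1, "alpha"), (2, "beta")]))  # True
```

The point of the shape above is that your existing extract and transform code stays untouched; only the final load call changes to go through the bridge, which owns the source-to-warehouse routing and the skip-and-retry behavior when a warehouse is down.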