I am setting up an ETL process and small data warehouse for querying the data in a few different dimensions. One issue is that the schema for the objects can mutate over time – mainly that some fields will be added and some removed. What are some methodologies or approaches for handling this?
(Note: the source data is managed as EAV, which is hard to query and slow to report against, hence the DW approach in the first place.)
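For context, querying the EAV source today looks roughly like the sketch below (sqlite3 with simplified, made-up attribute names); every attribute you want as a column needs a pivot or a self-join, which is the per-query cost I'd like the warehouse to pay once at load time instead.

```python
# Minimal sketch of the EAV pain point (hypothetical table/attribute names).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE eav (entity_id INTEGER, attribute TEXT, value TEXT);
    INSERT INTO eav VALUES
        (1, 'name',  'Widget A'), (1, 'price', '9.99'),
        (2, 'name',  'Widget B'), (2, 'price', '14.50'), (2, 'colour', 'red');
""")

# Pivot the EAV rows into one row per entity -- every reported attribute
# needs its own conditional aggregation (or an extra self-join).
rows = conn.execute("""
    SELECT entity_id,
           MAX(CASE WHEN attribute = 'name'   THEN value END) AS name,
           MAX(CASE WHEN attribute = 'price'  THEN value END) AS price,
           MAX(CASE WHEN attribute = 'colour' THEN value END) AS colour
    FROM eav
    GROUP BY entity_id
""").fetchall()

print(rows)  # [(1, 'Widget A', '9.99', None), (2, 'Widget B', '14.50', 'red')]
```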
What seems like a naive approach (which can often be the best) is to simply add columns over time to the destination shapes in the warehouse, and backfill existing objects with some query-valid placeholder data.
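Roughly what I have in mind, as a sketch with hypothetical table and column names (sqlite3 standing in for the actual warehouse):

```python
# "Just add columns" approach: when a new field appears in the source,
# add it to the destination table and backfill older rows with a
# query-valid placeholder.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dw_objects (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO dw_objects (id, name) VALUES (1, 'existing row')")

# New field `region` shows up in the source schema.
conn.execute("ALTER TABLE dw_objects ADD COLUMN region TEXT")
conn.execute("UPDATE dw_objects SET region = 'UNKNOWN' WHERE region IS NULL")

print(conn.execute("SELECT * FROM dw_objects").fetchall())
# [(1, 'existing row', 'UNKNOWN')]
```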
This is a little outside my domain expertise though, so looking for input.
From the warehouse side I would have a staging table for each (logical and temporal) source, so for all data coming from source 1 there would be a table for source 1 version 1, source 1 version 2, and so on.
You then have a second schema (or database) which is ‘reportable’ and which is filled from the staging table(s). You can add columns to it as additional data comes online, without ever needing to remove any.*
*You might decide to remove columns from it as a maintenance measure if they’re no longer reported on, but again you preserve the data in the staging table for future use.
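As a rough illustration of the idea, with hypothetical table names and sqlite3 standing in for the real warehouse: one staging table per source-and-version, plus a single reportable table whose column set only ever grows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Raw copies of each schema version, kept forever.
    CREATE TABLE stg_source1_v1 (id INTEGER, name TEXT);
    CREATE TABLE stg_source1_v2 (id INTEGER, name TEXT, region TEXT);  -- v2 added `region`

    -- Reportable table: union of all columns seen so far.
    CREATE TABLE rpt_objects (id INTEGER, name TEXT, region TEXT, source_version TEXT);

    INSERT INTO stg_source1_v1 VALUES (1, 'Widget A');
    INSERT INTO stg_source1_v2 VALUES (2, 'Widget B', 'EU');
""")

# Each source version gets its own load statement; columns missing in
# older versions are filled with a placeholder.
conn.executescript("""
    INSERT INTO rpt_objects (id, name, region, source_version)
        SELECT id, name, 'UNKNOWN', 'v1' FROM stg_source1_v1;
    INSERT INTO rpt_objects (id, name, region, source_version)
        SELECT id, name, region, 'v2' FROM stg_source1_v2;
""")

print(conn.execute("SELECT * FROM rpt_objects ORDER BY id").fetchall())
# [(1, 'Widget A', 'UNKNOWN', 'v1'), (2, 'Widget B', 'EU', 'v2')]
```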
Data warehouses typically can be “refilled” completely from some kind of OLTP database. It is not unusual to do this once per night, asynchronously. So whenever you have a new requirement where you have to make a schema change, one possible strategy is just to define that as a new “version” of your data warehouse, adapt your ETL process to the new version, delete all previous data, and refill the DW completely from scratch from your OLTP source again.

That makes schema changes very painless, since you can assume a completely empty database during the change. This way, you won’t need any “schema change” scripts: just delete the whole schema and run a complete set of CREATE TABLE scripts for the newest version of your DW.
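A minimal sketch of that strategy (sqlite3, and the file, table, and query names are stand-ins for your actual DW and OLTP extract): the nightly job drops everything and recreates the schema from the current DDL, so a schema change is just an edit to that DDL and to the load step.

```python
import sqlite3

DDL_CURRENT = """
    DROP TABLE IF EXISTS dw_objects;
    CREATE TABLE dw_objects (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
"""

def rebuild_warehouse(oltp_rows):
    dw = sqlite3.connect("dw.sqlite")
    dw.executescript(DDL_CURRENT)          # always start from an empty schema
    dw.executemany(
        "INSERT INTO dw_objects (id, name, region) VALUES (?, ?, ?)",
        oltp_rows,                          # whatever the OLTP extract returns
    )
    dw.commit()
    dw.close()

# Example nightly run with a stand-in for the OLTP extract.
rebuild_warehouse([(1, "Widget A", "EU"), (2, "Widget B", "US")])
```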
What you have to test is whether that simple strategy is fast enough for your case and suits your needs. If your data warehouse is small (whatever that means in your eyes), as you said, you should have no problem providing different versions of your data warehouse in parallel, so your users can use version “(n-1)” while the ETL process for version n is still running.
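A sketch of that parallel-version idea, again with placeholder file names: build version n into its own database while readers keep using the current one, then switch them over only once the load has finished.

```python
import os
import sqlite3

def build_new_version(path: str):
    # Build the next warehouse version in its own file; readers are untouched.
    dw = sqlite3.connect(path)
    dw.executescript("CREATE TABLE dw_objects (id INTEGER, name TEXT, region TEXT)")
    # ... full ETL load for the new version would run here ...
    dw.commit()
    dw.close()

build_new_version("dw_next.sqlite")                # users keep querying dw_current.sqlite
os.replace("dw_next.sqlite", "dw_current.sqlite")  # cut over once the load is complete
```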