It’s not clear how data consistency is maintained across different branches.
We have a large knowledge base which is populated by experts and delivered as updates to our clients. This data is also used for testing the product.
But if we have quite a few feature branches being developed in parallel, and each feature requires its own slightly different DB schema, won’t that become hell?
Surely some way of populating all the databases from one centralized store must exist.
And here comes the question: what are these methods?
Right now the only solution I see is to create some mapping mechanism and store the mappings in the corresponding DB, but I believe more elegant approaches exist.
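To make the mapping idea concrete, here is a minimal sketch of what such a mechanism could look like: a per-branch map from master columns to branch columns, used to generate a projection query that copies only the columns the branch schema kept. The table and column names are hypothetical, and SQLite is used only for brevity:

```python
import sqlite3

# Hypothetical mapping for one branch: master column -> branch column.
# A column missing from the map means the branch schema dropped it.
BRANCH_MAPPING = {
    "articles": {"id": "id", "title": "title", "body": "body"},  # "tags" dropped
}

def projection_sql(table, column_map):
    """Build an INSERT ... SELECT that copies only the mapped columns."""
    src_cols = ", ".join(column_map.keys())
    dst_cols = ", ".join(column_map.values())
    return (f"INSERT INTO branch.{table} ({dst_cols}) "
            f"SELECT {src_cols} FROM main.{table}")

# Master DB with the richer schema.
master = sqlite3.connect(":memory:")
master.execute("CREATE TABLE articles (id INTEGER, title TEXT, body TEXT, tags TEXT)")
master.execute("INSERT INTO articles VALUES (1, 'Intro', 'Hello', 'misc')")

# Branch DB with a narrower schema (no 'tags' column).
master.execute("ATTACH ':memory:' AS branch")
master.execute("CREATE TABLE branch.articles (id INTEGER, title TEXT, body TEXT)")

# Populate every branch table from the master via its mapping.
for table, column_map in BRANCH_MAPPING.items():
    master.execute(projection_sql(table, column_map))

print(master.execute("SELECT * FROM branch.articles").fetchall())
# -> [(1, 'Intro', 'Hello')]
```

The mapping itself could live in a table inside the branch DB instead of a Python dict; the projection query it generates would be the same.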
Update:
To clarify the problem.
We have one database (let’s call it the master DB) and N databases with slightly ‘narrower’ schemas compared to the schema of the master.
I.e. the master has the richest schema; the other DBs have fewer tables/columns.
Your database build and schema changes need to be under source control. Instead of shipping a database file with the knowledge base data already in it, have a separate build that will do an import of the knowledge data after the database is built or altered. The source of the data could be from text/xml files.
The import scripts would also be under source control since they have to handle the particular schema for this branch.
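Such a branch-specific import might look like the sketch below: the knowledge data lives in a neutral text format under source control, and the script loads only the fields this branch’s schema knows about. The file contents, table, and column names are all hypothetical, and SQLite stands in for the real database:

```python
import csv
import io
import sqlite3

# Hypothetical knowledge-base export; in practice this would be a
# versioned text/CSV/XML file checked in next to the schema scripts.
KNOWLEDGE_CSV = """id,title,body,tags
1,Intro,Hello,misc
2,Setup,Steps,howto
"""

# Columns this branch's schema actually has ('tags' was dropped here).
BRANCH_COLUMNS = ["id", "title", "body"]

def import_knowledge(conn, csv_text, columns):
    """Insert only the columns the branch schema supports."""
    placeholders = ", ".join("?" for _ in columns)
    sql = f"INSERT INTO articles ({', '.join(columns)}) VALUES ({placeholders})"
    for row in csv.DictReader(io.StringIO(csv_text)):
        conn.execute(sql, [row[c] for c in columns])

# Run after the branch's schema build/alter step has finished.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (id INTEGER, title TEXT, body TEXT)")
import_knowledge(db, KNOWLEDGE_CSV, BRANCH_COLUMNS)
print(db.execute("SELECT COUNT(*) FROM articles").fetchone()[0])
# -> 2
```

Because the column list is the only branch-specific part, each branch can keep its own short import script under source control while the data file stays shared.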
This is going to be sort of a hell if you don’t get it under control and automated. If it were easy, everybody would be doing it.