We have an Azure Databricks workspace with Dev/QA/Prod environments. Every time the data engineers ship artifacts from non-prod to prod (e.g. Python notebooks, config modules, etc.), they have to copy the artifacts manually to the next environment and change the paths in the config files so they work correctly there.
It's a hassle, and they see an opportunity to reduce toil here by leveraging Azure DevOps pipelines.
How do we create a pipeline to migrate these artifacts, and what is the best practice? We've considered, for example, storing the environment-specific values in Azure Key Vault and generalizing the config paths so they are replaced dynamically at pipeline runtime. But is that really the best way? Is there a better or newer method that makes this even easier? That's what I'm trying to understand so we can do this the best way possible in 2024.
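To make the Key Vault idea concrete, this is roughly what we are picturing on the notebook side. Nothing here exists yet: the secret scope name "env-secrets" and the key names are placeholders we made up for illustration.

# Rough sketch of the Key Vault idea, assuming an Azure Key Vault-backed secret
# scope named "env-secrets" with per-environment keys "storage-account" and
# "container" (placeholder names, not something we have today).
# dbutils is available implicitly inside a Databricks notebook.
storage_account = dbutils.secrets.get(scope="env-secrets", key="storage-account")
container = dbutils.secrets.get(scope="env-secrets", key="container")

# The config would then build its abfss root from these values instead of
# hardcoding a dev-specific URL.
root_path = f"abfss://{container}@{storage_account}.dfs.core.windows.net"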
Here's a sample of the current Databricks workspace structure, as well as an example config file with hardcoded paths that always have to be changed for each new environment.
Databricks Workspace structure:
Workspace
-> Shared
--> Demo
--> Metrics Engine
--> modules
---> _resources
---> test
---> example.ipynb
---> mod_config.ipynb
---> mod-schema.ipynb
mod_config.ipynb has some hardcoded paths like this (they start with 'abfss'):
config = {
    ConfigurationKeys.ROOT_PATH + Constants.FileFormats.CSV: ConfigEntry(None, 'abfss://companyxyzanalyticsdev@companyxyzdatageneraldev.dfs.core.windows.net/source/companyxyz/extracts/sampledb'),
    ConfigurationKeys.ROOT_PATH + Constants.FileFormats.PARQUET: ConfigEntry(None, 'abfss://companyxyzanalyticsdev@companyxyzdatageneraldev.dfs.core.windows.net/sink/tables/parquet/'),
    ConfigurationKeys.OUTPUT_PATH: ConfigEntry(None, 'abfss://[email protected]/data-projects/internal/data-regression-analysis/resultset'),
    ConfigurationKeys.RELATIVE_PATH_DATALAKE_TABLES_TRANSACTIONS: ConfigEntry(Constants.FileFormats.CSV, DataLake.RelativePaths.SourceTables.Transactions),
    ConfigurationKeys.RELATIVE_PATH_DATALAKE_TABLES_REGIONS: ConfigEntry(Constants.FileFormats.CSV, DataLake.RelativePaths.SourceTables.Regions),
    ...
So ideally, at pipeline runtime the paths would be switched to the QA values and ultimately the Prod ones. Right now, as you can see, they are Dev-specific, and they have to be updated manually after the artifacts are copied manually to the other environments, which, as mentioned, is a hassle. Maybe a transformation could be applied to the notebooks during the pipeline run, or maybe the paths should be generalized and the environment-specific values stored in Key Vault; whatever the best approach is.
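For example, the rough sketch below is what I imagine a generalized mod_config could look like. The ENVIRONMENT variable name and the QA/Prod placeholder values are assumptions for illustration only; the per-environment values could equally come from Key Vault or an Azure DevOps variable group.

import os

# Sketch only: pick the environment at runtime, e.g. from a cluster environment
# variable or a value injected by the pipeline ("ENVIRONMENT" is a made-up name).
env = os.environ.get("ENVIRONMENT", "dev")

# Hypothetical per-environment values; only the dev names are real, the QA/Prod
# placeholders would be filled in (or fetched from Key Vault) later.
settings = {
    "dev": ("companyxyzanalyticsdev", "companyxyzdatageneraldev"),
    "qa": ("<qa-container>", "<qa-storage-account>"),
    "prod": ("<prod-container>", "<prod-storage-account>"),
}
container, storage_account = settings[env]

# Build the abfss root once; everything else stays relative to it.
root = f"abfss://{container}@{storage_account}.dfs.core.windows.net"

# ConfigurationKeys, Constants and ConfigEntry are the same classes already
# used in the existing mod_config.ipynb shown above.
config = {
    ConfigurationKeys.ROOT_PATH + Constants.FileFormats.CSV:
        ConfigEntry(None, f"{root}/source/companyxyz/extracts/sampledb"),
    ConfigurationKeys.ROOT_PATH + Constants.FileFormats.PARQUET:
        ConfigEntry(None, f"{root}/sink/tables/parquet/"),
    # ... the relative-path entries stay exactly as they are today
}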
FYI, the Databricks repo is hosted in Bitbucket (though it could be changed to Azure Repos if that makes things easier; the devs are just very used to Bitbucket).
Also, I have checked out this thread, for example, but it uses Terraform to update the parameters, and we don't use Terraform. However, if we have to resort to IaC after all, we could probably integrate Bicep if needed.
I also came across this repo example, though it uses GitHub integration and we don't want to use GitHub. I'm also not sure whether it's implemented with best practices for 2024.