I’d like to use an Oracle database for a new Spring Cloud Data Flow install, but it doesn’t come with ojdbc, and I’m a little confused on the best way to add custom JDBC drivers.
According to the documentation:
To use any other database you need to put the corresponding JDBC driver jar on the classpath of the server as described here.
The “here” link is broken, but I was able to get it working by downloading the code, updating some of the pom.xml files, and rebuilding (I inferred the steps from the Skipper documentation, which has similar wording and a working link).
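For reference, the change I made was essentially adding the driver as a dependency in the server module’s pom.xml before rebuilding, something along these lines (the artifact and version below are just what I happened to use):

```xml
<!-- Added to the server module's pom.xml so the rebuilt jar bundles the Oracle driver.
     Pick whichever ojdbc artifact/version matches your database. -->
<dependency>
    <groupId>com.oracle.database.jdbc</groupId>
    <artifactId>ojdbc8</artifactId>
    <version>19.3.0.0</version>
</dependency>
```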
But this feels a little clunky. Whenever I want to update to a new version of SCDF, I’ll need to re-download the source and reapply the same pom.xml changes. That made me wonder whether there is a better way, or whether this really is the right approach.
So I kept looking and found this sample, which instead creates a custom build using @EnableDataFlowServer. That seems pretty tempting, but the sample is several years old, and I can’t find any other documentation encouraging this.
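As I understand it, the sample boils down to a small Spring Boot project like the sketch below (my own paraphrase of the idea, not the sample’s actual code), with the JDBC driver declared as an ordinary dependency of that project:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.dataflow.server.EnableDataFlowServer;

// A custom Data Flow server: the surrounding Maven project would declare the
// Data Flow server dependencies plus the ojdbc driver, so no fork of the
// upstream source is needed.
@SpringBootApplication
@EnableDataFlowServer
public class CustomDataFlowServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(CustomDataFlowServerApplication.class, args);
    }
}
```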
Does anyone have any clarity on what the right approach is?
In case it matters, I’m using Spring Cloud Data Flow deployed to Kubernetes via the new Bitnami Helm charts.
To add custom JDBC drivers to Spring Cloud Data Flow, you should follow the documentation’s instructions for extending the classpath of the spring-cloud-dataflow-server, spring-cloud-skipper-server, and spring-cloud-dataflow-composed-task-runner jar files.
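In practice that means adding the ojdbc dependency to each of those modules’ pom.xml files (as you already did for the server) and rebuilding the jars, roughly like this (the module layout differs between releases, so adjust to the source tree you checked out):

```bash
# From the root of the spring-cloud-dataflow source tree, after adding the
# ojdbc dependency to the relevant module poms; skip tests to speed things up.
./mvnw clean package -DskipTests
```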
If you are deploying containers, follow the next paragraph of the documentation to create container images using Paketo.
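If you go that route, a pack build over each rebuilt jar is the usual shape of it (the image name, jar path, and builder below are placeholders, not the exact values from the documentation):

```bash
# Build a container image from the rebuilt server jar with the Paketo buildpacks.
# Repeat for the skipper and composed-task-runner jars.
pack build my-registry.example.com/scdf/spring-cloud-dataflow-server:2.11.5-oracle \
  --path spring-cloud-dataflow-server/target/spring-cloud-dataflow-server-2.11.5.jar \
  --builder paketobuildpacks/builder-jammy-base
```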
You will have to publish your containers to a private registry and then update the image values in the Helm values file to point at the new locations.
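With the Bitnami chart that typically means overriding the per-component image values, something like the snippet below (the key names follow the usual Bitnami layout, so verify them against the chart’s values.yaml; the composed-task-runner image is configured the same way):

```yaml
# values-oracle.yaml -- registry/repository/tag are placeholders for wherever
# you pushed the custom images; check the chart's values.yaml for the exact keys.
server:
  image:
    registry: my-registry.example.com
    repository: scdf/spring-cloud-dataflow-server
    tag: 2.11.5-oracle
skipper:
  image:
    registry: my-registry.example.com
    repository: scdf/spring-cloud-skipper-server
    tag: 2.11.5-oracle
```

Then pass the file at install or upgrade time, e.g. helm upgrade --install scdf bitnami/spring-cloud-dataflow -f values-oracle.yaml.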