In a machine learning data pre-processing pipeline, the pipeline steps are normally serialised, saved as pickles, or saved as layers in a model so they can be loaded again later for serving or prediction, thereby preserving the transform / fit parameters of each step derived from the original training data.
Is there a Python library or approach that instead allows the parameters or attribute values of processing steps to be returned as data, so they can be saved, for example in a database, without needing to save the entire object?
This would then allow the processing steps for serving / predicting to be created fresh at predict time and configured using the saved parameters and attribute values loaded from the database.
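To illustrate the idea, here is a minimal hand-rolled sketch using scikit-learn's `StandardScaler` (`mean_`, `scale_`, `var_` and `n_samples_seen_` are its real fitted attributes; the extract/restore logic is only my own illustration of the approach, not an existing API):

```python
import json

import numpy as np
from sklearn.preprocessing import StandardScaler

# --- training time: fit, then extract the learned state as plain data ---
X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = StandardScaler().fit(X_train)

state = {
    "mean_": scaler.mean_.tolist(),
    "scale_": scaler.scale_.tolist(),
    "var_": scaler.var_.tolist(),
    "n_samples_seen_": int(scaler.n_samples_seen_),
}
payload = json.dumps(state)  # plain data: could be stored in a database row

# --- predict time: build a fresh step and restore the saved state ---
loaded = json.loads(payload)
restored = StandardScaler()
restored.mean_ = np.asarray(loaded["mean_"])
restored.scale_ = np.asarray(loaded["scale_"])
restored.var_ = np.asarray(loaded["var_"])
restored.n_samples_seen_ = loaded["n_samples_seen_"]
restored.n_features_in_ = restored.mean_.shape[0]  # sklearn validates n_features_in_ when it is set

X_new = np.array([[2.0, 20.0]])
assert np.allclose(restored.transform(X_new), scaler.transform(X_new))
```

Hand-writing this mapping for every transformer type is exactly the boilerplate I was hoping a library would already provide.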
I cannot find any Python library, or wrapper of existing libraries (e.g. TensorFlow, scikit-learn, PyTorch, etc.), that provides an API for saving and setting pipeline step parameters.
Does one exist?
If not, why not?
It seems to me that this would be useful for portability, for abstracting away from the implementation library, and for inspection and debugging of processing steps.
I have looked at scikit-learn's `get_params()` and `set_params()`, but they become very complex, especially where nested transforms are used and the object tree has to be walked.
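For example, with a nested pipeline (everything below is standard scikit-learn API; it shows the problem rather than a solution):

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("pca", PCA(n_components=2))])

# get_params(deep=True) does round-trip constructor hyperparameters, with
# nested keys like 'scale__with_mean' and 'pca__n_components'...
params = pipe.get_params(deep=True)
print(params["pca__n_components"])  # 2

# ...but the dict also contains the estimator objects themselves
# (params['scale'] is the StandardScaler instance), so it is not directly
# JSON-serialisable, and the nesting has to be unpicked by hand.
fresh = Pipeline([("scale", StandardScaler()), ("pca", PCA())])
fresh.set_params(pca__n_components=params["pca__n_components"])

# Worse, fitted state such as scale.mean_ or pca.components_ is never part of
# get_params() at all, so it still has to be extracted attribute by attribute.
```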
Am I missing a fundamental concept or something?