I have trained a model that uses a LightGBM classifier with scikit-learn calibration (`CalibratedClassifierCV`) on top.
I am tracking the entire training process via MLflow and registering the final model so it can later be loaded in a deployment.
import mlflow
from lightgbm import LGBMClassifier as lgbmc
from sklearn.calibration import CalibratedClassifierCV

mlflow.start_run()
mlflow.<flavor>.autolog()

model = CalibratedClassifierCV(
    lgbmc(
        learning_rate=args.learning_rate,
        n_estimators=args.n_estimators,
        subsample=sr,
        class_weight='balanced',
        num_leaves=args.num_leaves,
    ),
    cv=args.cv_folds,
    n_jobs=-1,
).fit(x_train, y_train)

mlflow.<flavor>.save_model(
    lgb_model=model,  # keyword is `lgb_model` for mlflow.lightgbm, `sk_model` for mlflow.sklearn
    path=args.model,
)
mlflow.<flavor>.log_model(
    lgb_model=model,
    registered_model_name=args.registered_model_name,
    artifact_path=args.registered_model_name,
)
mlflow.end_run()
Both flavors seem to work in this case, but which flavor should be used for such multi-flavor ensembles or stacked models?