I am trying to add parallel computing to the tuning of a stacked learner, where the tuning instance is created with the ti() function from the mlr3tuning package. The stack has two levels: level 0 tunes 9 learners, and level 1 averages the predictions of those 9 learners. The level-0 learners are classif.gausspr, classif.glmnet, classif.rpart, classif.kknn, classif.svm, classif.gbm, classif.xgboost, classif.ranger and classif.nnet. In the mlr3 book, the example that illustrates parallel tuning does not cover stacked learners, so I am not sure whether I am doing this correctly, especially since my script takes a very long time to run (at the moment I cannot get any results out of it). The main code of my script is included below.
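For reference, my understanding of the parallelization described in the book is that it goes through the future framework: a parallel plan is registered once, and mlr3 then distributes the resampling iterations of each tuning evaluation over the workers, roughly like this (4 workers is just an example):
## Register a parallel backend before starting the tuning.
future::plan("multisession", workers = 4)
## ... create the tuning instance and call tuner$optimize() as usual ...
## Switch back to sequential execution afterwards.
future::plan("sequential")
I am not sure whether this carries over unchanged to a stacked graph learner.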
In case my script is incorrect, I was wondering whether the following alternative would work. I was thinking of doing the parallel computing like this:
Step 1: Create a list of the learners, learner_list.
Step 2: Tune each learner in parallel:
cl <- parallel::makeCluster(4, outfile = "Output.txt")
doSNOW::registerDoSNOW(cl)
## Each worker needs the mlr3 packages loaded (mlr3mbo for the "mbo" tuner).
tuner_output_list <- foreach::foreach(learner_ID = 1:9, .errorhandling = "pass",
                                      .packages = c("mlr3", "mlr3tuning", "mlr3mbo")) %dopar% {
  ## Create a tuning instance for this learner with ti() ...
  run_tuner <- mlr3tuning::ti(task = task,
                              learner = learner_list[[learner_ID]],
                              resampling = resampling,
                              measures = measure,
                              terminator = terminator)
  ## ... run the tuner on it and return the optimized instance.
  tuner$optimize(run_tuner)
  run_tuner
}
parallel::stopCluster(cl)
Step 3: Retrieve the tuning results from Step 2.
Step 4: Apply mlr_learners_classif.avg to combine the tuned learners, using the tuning results from Step 2 (I am not quite sure how to do this yet; a sketch of what I have in mind is below).
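To make Steps 3 and 4 concrete, this is the kind of thing I had in mind (only a sketch: it assumes tuner_output_list holds the optimized tuning instances from Step 2, that each learner was tuned with a single measure so that result_learner_param_vals is a single configuration, and that the learners were created with predict_type = "prob"):
library(mlr3pipelines)
## Step 3: drop workers that returned an error (.errorhandling = "pass" hands
## the error object back instead of stopping) and write the best
## hyperparameters back into the corresponding learner.
ok <- !vapply(tuner_output_list, inherits, logical(1), what = "error")
tuned_learners <- Map(function(lrn_i, inst) {
  lrn_i <- lrn_i$clone(deep = TRUE)
  lrn_i$param_set$values <- inst$result_learner_param_vals
  lrn_i
}, learner_list[ok], tuner_output_list[ok])
## Step 4: average the probability predictions of the tuned level-0 learners;
## mlr_learners_classif.avg / po("classifavg") implements the level-1 averaging.
graph <- gunion(lapply(tuned_learners, po)) %>>%
  po("classifavg", innum = length(tuned_learners))
stacked_learner <- as_learner(graph)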
Code for the stacked learner tuning:
inner_resampling <- mlr3::rsmp("cv", folds = 2)
performance_measures <- c(mlr3::msr("classif.sensitivity"), mlr3::msr("classif.specificity"),
mlr3::msr("classif.acc"), mlr3::msr("classif.auc"))
terminator <- mlr3tuning::trm("evals", n_evals = 100)
## The "mbo" tuner is provided by the mlr3mbo package, which has to be loaded.
tuner <- mlr3tuning::tnr("mbo")
## Create tuning instance
instance <- mlr3tuning::ti(task = task,
learner = lrn_graph_1,
resampling = inner_resampling,
measures = performance_measures,
terminator = terminator)
## Run the tuning process
tuner$optimize(instance)
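Once tuner$optimize(instance) has finished, I expect to read the outcome roughly like this (with several tuning measures the result is a Pareto front rather than a single best configuration):
## Best configuration(s) found by the tuner, with their scores.
instance$result
## All configurations evaluated during the tuning run.
data.table::as.data.table(instance$archive)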