This vignette gives an introduction to hyperparameter tuning with mlrMBO in the context of machine learning with the mlr package.


For the purpose of hyperparameter tuning, we will use the mlr package. mlr provides a framework for machine learning in R that comes with a broad range of machine learning functionalities and is easily extendable. One possible approach is to use mlr to train a learner and evaluate its performance for a given hyperparameter configuration inside the objective function. Alternatively, we can access mlrMBO's model-based optimization directly through mlr's tuning functionalities. The benefit of the latter is that model-based hyperparameter tuning integrates into your machine learning experiments without any overhead.


First, we load the required packages. Next, we configure mlr to suppress the learner output to improve output readability. Additionally, we define a global variable giving the number of tuning iterations. Note that this number is set (very) low to reduce runtime.


library(mlrMBO)  # also attaches mlr, ParamHelpers and smoof
configureMlr(on.learner.warning = "quiet", show.learner.output = FALSE)

iters = 5

1 Custom objective function to evaluate performance

As an example, we tune the cost and the gamma parameter of an RBF SVM on the iris data. First, we define the parameter set. Note that the transformations added via the trafo argument mean that we tune the parameters on a logarithmic scale.
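A sketch of such a parameter set (the bounds of -15 to 15 on the log2 scale are an assumption, a common choice for SVM tuning that is consistent with the optimization path shown below):

```r
# Both parameters are tuned on a log2 scale: the optimizer works on
# [-15, 15], the trafo maps a proposal x to the actual value 2^x.
par.set = makeParamSet(
  makeNumericParam("cost", lower = -15, upper = 15, trafo = function(x) 2^x),
  makeNumericParam("gamma", lower = -15, upper = 15, trafo = function(x) 2^x)
)
```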

Next, we define the objective function. Inside it, we define a learner and set its hyperparameters using makeLearner. To evaluate the learner's performance we use the resample function, which automatically takes care of fitting the model and evaluating it on a test set. In this example, resampling is done with 3-fold cross-validation by passing the ResampleDesc object cv3, which comes predefined with mlr, as an argument to resample. The measure to be optimized can be specified (e.g. by passing measures = ber for the balanced error rate), but mlr has a default for each task type; for classification it is the mmce (mean misclassification error). Since the mmce should be minimized, we set minimize = TRUE, and because the objective function takes a named list of parameter values rather than a plain vector, we set has.simple.signature = FALSE. Note that iris.task is provided automatically when loading mlr.
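A minimal sketch of this objective function, assuming the parameter set from above is stored in `par.set` (the name `svm` matches the call to mbo() below):

```r
# Objective: cross-validated mmce of an SVM with the proposed
# hyperparameters. has.simple.signature = FALSE means fn receives a
# named list x, which we can pass directly as par.vals.
svm = makeSingleObjectiveFunction(
  name = "svm.tuning",
  fn = function(x) {
    lrn = makeLearner("classif.svm", par.vals = x)
    resample(lrn, iris.task, cv3, show.info = FALSE)$aggr
  },
  par.set = par.set,
  has.simple.signature = FALSE,
  minimize = TRUE
)
```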

Now we create a default MBOControl object and tune the RBF SVM.

ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = iters)

res = mbo(svm, control = ctrl, show.info = FALSE)
## Recommended parameters:
## cost=9.14; gamma=-5.12
## Objective: y = 0.040
## Optimization path
## 8 + 5 entries in total, displaying last 10 (or less):
##          cost       gamma          y dob eol error.message exec.time
## 4    1.980396   7.0965714 0.64666667   0  NA          <NA>     0.115
## 5    9.567881  -2.0376947 0.09333333   0  NA          <NA>     0.110
## 6  -11.411875   9.2000031 0.78666667   0  NA          <NA>     0.112
## 7   13.790890  13.3973753 0.73333333   0  NA          <NA>     0.122
## 8  -10.864032 -11.6256564 0.70666667   0  NA          <NA>     0.115
## 9   -7.169453  -3.2438288 0.72000000   1  NA          <NA>     0.112
## 10   4.873466  -1.5862594 0.05333333   2  NA          <NA>     0.109
## 11   6.940316  -3.1489450 0.06666667   3  NA          <NA>     0.111
## 12   4.559894   0.3454336 0.06000000   4  NA          <NA>     0.110
## 13   9.140613  -5.1243985 0.04000000   5  NA          <NA>     0.108
##             cb error.model train.time  prop.type propose.time         se
## 4           NA        <NA>         NA initdesign           NA         NA
## 5           NA        <NA>         NA initdesign           NA         NA
## 6           NA        <NA>         NA initdesign           NA         NA
## 7           NA        <NA>         NA initdesign           NA         NA
## 8           NA        <NA>         NA initdesign           NA         NA
## 9  -0.03268684        <NA>      0.070  infill_cb        0.256 0.10408864
## 10 -0.09861354        <NA>      0.097  infill_cb        0.257 0.16931643
## 11 -0.04146501        <NA>      0.160  infill_cb        0.252 0.06947131
## 12 -0.04353029        <NA>      0.056  infill_cb        0.255 0.11964192
## 13 -0.01954295        <NA>      0.061  infill_cb        0.249 0.10518207
##          mean lambda
## 4          NA     NA
## 5          NA     NA
## 6          NA     NA
## 7          NA     NA
## 8          NA     NA
## 9  0.07140180      1
## 10 0.07070288      1
## 11 0.02800630      1
## 12 0.07611163      1
## 13 0.08563913      1
## $cost
## [1] 9.140613
## $gamma
## [1] -5.124399
## [1] 0.04
op = as.data.frame(res$opt.path)
plot(cummin(op$y), type = "l", ylab = "mmce", xlab = "iteration")

2 Using mlr’s tuning interface

Instead of defining an objective function where the learner’s performance is evaluated, we can make use of model-based optimization directly from mlr. We just create a TuneControl object, passing the MBOControl object to it. Then we call tuneParams to tune the hyperparameters.
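A sketch of this approach, reusing the numeric parameter set `par.set` and the iteration budget from above:

```r
# Wrap the MBOControl in a TuneControl and let tuneParams drive the
# learner fitting and resampling internally.
ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = iters)
tune.ctrl = makeTuneControlMBO(mbo.control = ctrl)
res = tuneParams(makeLearner("classif.svm"), iris.task, cv3,
  par.set = par.set, control = tune.ctrl, show.info = FALSE)
```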

Hierarchical mixed space optimization

In many cases, the hyperparameter space is not purely numerical but mixed and often even hierarchical. This works out-of-the-box and needs no adaptation of our previous example. (Recall that a suitable surrogate model is chosen automatically, as explained here.) To demonstrate this, we tune the cost and the kernel parameter of an SVM. When kernel takes the value radial, gamma needs to be specified; for a polynomial kernel, the degree needs to be specified.
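A sketch of such a hierarchical parameter set (the bounds and the inclusion of a linear kernel are assumptions):

```r
# gamma is only active for the radial kernel, degree only for the
# polynomial kernel; the requires clauses encode this hierarchy.
par.set = makeParamSet(
  makeNumericParam("cost", lower = -15, upper = 15, trafo = function(x) 2^x),
  makeDiscreteParam("kernel", values = c("radial", "polynomial", "linear")),
  makeNumericParam("gamma", lower = -15, upper = 15, trafo = function(x) 2^x,
    requires = quote(kernel == "radial")),
  makeIntegerParam("degree", lower = 1, upper = 4,
    requires = quote(kernel == "polynomial"))
)
```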

Now we can just repeat the setup from the previous example and tune the hyperparameters.

ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = iters)
tune.ctrl = makeTuneControlMBO(mbo.control = ctrl)
res = tuneParams(makeLearner("classif.svm"), iris.task, cv3, par.set = par.set, control = tune.ctrl,
  show.info = FALSE)

Parallelization and multi-point proposals

We can easily add multi-point proposals and parallelize their evaluation using the parallelMap package. (Note that the multicore back-end chosen here does not work on Windows machines. Please refer to the parallelization section for details on parallelization and multi-point proposals.) In each iteration, we propose as many points as CPUs are used for parallelization. As infill criterion we use the Expected Improvement.
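A sketch of this setup, reusing `par.set` and `iters` from above; the choice of two CPUs and the constant-liar multi-point method (a common companion to Expected Improvement) are assumptions:

```r
library(parallelMap)

# Propose 2 points per iteration, generated with the constant-liar
# strategy on top of the Expected Improvement infill criterion.
ctrl = makeMBOControl(propose.points = 2)
ctrl = setMBOControlTermination(ctrl, iters = iters)
ctrl = setMBOControlInfill(ctrl, crit = crit.ei)
ctrl = setMBOControlMultiPoint(ctrl, method = "cl", cl.lie = min)
tune.ctrl = makeTuneControlMBO(mbo.control = ctrl)

parallelStartMulticore(cpus = 2, show.info = FALSE)  # not available on Windows
res = tuneParams(makeLearner("classif.svm"), iris.task, cv3,
  par.set = par.set, control = tune.ctrl, show.info = FALSE)
parallelStop()
```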

Use case: Pipeline configuration

It is also possible to tune a whole machine learning pipeline, i.e. preprocessing and model configuration. The example pipeline is:

* Feature filtering based on an ANOVA test or covariance, such that between 50% and 100% of the features remain.
* Select either an SVM or a naive Bayes classifier.
* Tune the parameters of the selected classifier.

First, we define the parameter space:
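A sketch of this joint parameter space; the concrete filter method names and the bounds are assumptions, and the classifier parameters carry the learner id as a prefix, as the ModelMultiplexer expects:

```r
# Joint space over filter settings, model choice, and the SVM's
# hyperparameters (naive Bayes needs no numeric tuning here).
par.set = makeParamSet(
  makeDiscreteParam("fw.method", values = c("anova.test", "variance")),
  makeNumericParam("fw.perc", lower = 0.5, upper = 1),
  makeDiscreteParam("selected.learner",
    values = c("classif.svm", "classif.naiveBayes")),
  makeNumericParam("classif.svm.cost", lower = -15, upper = 15,
    trafo = function(x) 2^x,
    requires = quote(selected.learner == "classif.svm")),
  makeNumericParam("classif.svm.gamma", lower = -15, upper = 15,
    trafo = function(x) 2^x,
    requires = quote(selected.learner == "classif.svm"))
)
```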

Next, we create the control objects and a suitable learner, combining makeFilterWrapper() with makeModelMultiplexer(). (Please refer to the advanced tuning chapter of the mlr tutorial for details.) Afterwards, we can run tuneParams() and check the results.
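The steps above can be sketched as follows, reusing `par.set` and `iters` from above (the default filter method set on the wrapper is a placeholder that the tuner overrides):

```r
# Combine both classifiers in a ModelMultiplexer and add a filter
# wrapper on top, so filter and model are configured jointly.
lrn = makeModelMultiplexer(list(
  makeLearner("classif.svm"),
  makeLearner("classif.naiveBayes")
))
lrn = makeFilterWrapper(lrn, fw.method = "anova.test")

ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = iters)
tune.ctrl = makeTuneControlMBO(mbo.control = ctrl)
res = tuneParams(lrn, iris.task, cv3, par.set = par.set,
  control = tune.ctrl, show.info = FALSE)
```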