# Parameter tuning with mlrHyperopt

Hyperparameter tuning with mlr is rich in options, as there are multiple tuning methods:

• Simple Random Search
• Grid Search
• Iterated F-Racing (via irace)
• Sequential Model-Based Optimization (via mlrMBO)

Also, the search space is easily definable and customizable for each of the 60+ learners of mlr using the ParamSets from the ParamHelpers package.
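For illustration, here is a minimal sketch of how such a custom search space can be defined and tuned with mlr's own `tuneParams()`; the SVM parameters, ranges, and tuning budget are illustrative choices, not defaults:

```r
library(mlr)
library(ParamHelpers)

# Custom search space for an SVM on a log2 scale; parameter names and
# ranges are illustrative choices.
ps = makeParamSet(
  makeNumericParam("cost",  lower = -5, upper = 5, trafo = function(x) 2^x),
  makeNumericParam("gamma", lower = -5, upper = 5, trafo = function(x) 2^x)
)

# Tune the search space with a simple random search
res = tuneParams(
  learner    = "classif.svm",
  task       = iris.task,
  resampling = cv3,
  par.set    = ps,
  control    = makeTuneControlRandom(maxit = 20)
)
```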

The only shortcoming of mlr in comparison to caret in this regard is that mlr itself does not ship with defaults for the search spaces. This is where mlrHyperopt comes into play.

mlrHyperopt offers

• default search spaces for the most important learners in mlr,
• parameter tuning in one line of code,
• and an API to add and access custom search spaces from the mlrHyperopt Database (sketched below).
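A rough sketch of accessing the database from R follows; the accessor functions used here (getDefaultParConfig(), getParConfigParSet(), downloadParConfigs()) are assumptions about the mlrHyperopt API, so consult the package documentation for the exact interface:

```r
library(mlrHyperopt)

# Assumption: getDefaultParConfig() returns the default parameter
# configuration (search space plus constant settings) registered for an
# mlr learner, and downloadParConfigs() fetches user-contributed
# configurations from the mlrHyperopt Database.
pc = getDefaultParConfig(learner = "classif.svm")
getParConfigParSet(pc)

pcs = downloadParConfigs()  # optionally filtered, e.g. by learner
```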

### Tuning in one line

Tuning can be done in one line, relying on the defaults. By default, the misclassification rate is minimized.
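For example, a minimal sketch of the one-line call using mlrHyperopt's hyperopt() function; the task (iris.task from mlr) and the SVM learner are illustrative choices:

```r
library(mlrHyperopt)

# One line of tuning: hyperopt() picks the default search space and a
# suitable tuning method for the given learner.
res = hyperopt(iris.task, learner = "classif.svm")
```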

We can find out what hyperopt did by inspecting the res object.
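For instance, assuming res behaves like an mlr TuneResult (which the hyperopt() call above should return):

```r
# Printing the result shows the best hyperparameter settings found and
# the corresponding mean misclassification error.
print(res)
```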

Depending on the parameter space, mlrHyperopt will automatically decide on a suitable tuning method.

As the search space defined in the ParamSet is purely numeric, sequential Bayesian optimization was chosen. We can look at the evaluated parameter configurations and visualize the optimization run.
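A sketch of both steps, assuming the result object carries an opt.path slot as mlr tuning results do; plotOptPath() comes from the ParamHelpers package:

```r
library(ParamHelpers)

# All evaluated hyperparameter configurations and their performance values
opt.df = as.data.frame(res$opt.path)
head(opt.df)

# Multi-panel plot of the optimization run
# (search space panels and objective value over time)
plotOptPath(res$opt.path)
```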

The upper left plot shows the distribution of the tried settings in the search space, and contour lines indicate where regions of good configurations are located. The lower right plot shows the value of the objective (the misclassification rate) and how it decreases over time. It also shows nicely that wrong settings can lead to bad results.

### Using the mlrHyperopt API with mlr

You can also use mlrHyperopt just to access the default parameter search spaces. Often you don't want to rely on the default procedures of mlrHyperopt but instead incorporate them into your own mlr workflow. Here is one example of how you can use the default search spaces for an easy benchmark:
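The following is a sketch of such a benchmark; the mlrHyperopt accessors (getDefaultParConfig(), getParConfigParSet(), getParConfigParVals()) as well as the choice of task, learners, and tuning budget are assumptions for illustration:

```r
library(mlr)
library(mlrHyperopt)

# Wrap each learner in a tuning wrapper that uses the default search
# space shipped with mlrHyperopt for that learner.
lrn.ids = c("classif.xgboost", "classif.nnet")
tuned.lrns = lapply(lrn.ids, function(id) {
  pc = getDefaultParConfig(learner = id)  # assumed mlrHyperopt accessor
  lrn = makeLearner(id, par.vals = getParConfigParVals(pc))
  # Note: some default search spaces contain task-dependent expressions;
  # depending on the learner these may need to be resolved first
  # (see ParamHelpers::evaluateParamExpressions).
  makeTuneWrapper(
    learner    = lrn,
    resampling = cv3,
    par.set    = getParConfigParSet(pc),
    control    = makeTuneControlRandom(maxit = 20)
  )
})

# Untuned learners for comparison
all.lrns = c(tuned.lrns, lapply(lrn.ids, makeLearner))

# Benchmark on the Pima Indians Diabetes task shipped with mlr
bmr = benchmark(learners = all.lrns, tasks = pid.task, resamplings = cv10)
plotBMRBoxplots(bmr)
```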

As we can see, we were able to improve the performance of xgboost and nnet without any additional knowledge about which parameters we should tune. The improvement is especially noticeable for nnet.