Exploring Learner Predictions

Learners use features to learn a prediction function and make predictions, but the effect of those features is often not apparent. mlr can estimate the partial dependence of a learned function on a subset of the feature space using generatePartialDependenceData.

Partial dependence plots reduce the potentially high dimensional function estimated by the learner, and display a marginalized version of this function in a lower dimensional space. For example suppose $Y = f(X) + \epsilon$, where $\mathbb{E}[\epsilon | X] = 0$. With pairs $(x, y)$ drawn independently from this statistical model, a learner may estimate $\hat{f}$, which, if $X$ is high dimensional, can be uninterpretable. Suppose we want to approximate the relationship between some subset of $X$. We partition $X$ into two sets, $X_s$ and $X_c$, such that $X = X_s \cup X_c$, where $X_s$ is a subset of $X$ of interest.

The partial dependence of $f$ on $X_s$ is

$$f_{X_s} = \mathbb{E}_{X_c}\left[f(X_s, X_c)\right],$$

where $X_c$ is integrated out. We use the following estimator:

$$\hat{f}_{X_s} = \frac{1}{N} \sum_{i = 1}^{N} \hat{f}(X_s, x_{ic}).$$

The individual conditional expectation of an observation can also be estimated using the above algorithm absent the averaging, giving $\hat{f}^{(i)}_{X_s}$. This allows the discovery of features of $\hat{f}$ that may be obscured by an aggregated summary of $\hat{f}$.
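
To make the estimator concrete, here is a minimal sketch in plain R (illustrative only: pd_sketch, model, data, and grid are hypothetical names, predict is assumed to return a numeric vector, and mlr's actual implementation additionally handles tasks, class probabilities, and summary functions):

pd_sketch = function(model, data, feature, grid) {
  # For each grid value v of the feature of interest (X_s), fix that feature
  # at v for every observation, predict, and average over the remaining
  # observed features (X_c), i.e., the estimator given above.
  sapply(grid, function(v) {
    newdata = data
    newdata[[feature]] = v
    mean(predict(model, newdata))
  })
}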

The partial derivative of the partial dependence function, $\frac{\partial \hat{f}_{X_s}}{\partial X_s}$, and of the individual conditional expectation function, $\frac{\partial \hat{f}^{(i)}_{X_s}}{\partial X_s}$, can also be computed. For regression and survival tasks the partial derivative of a single feature $X_s$ is the gradient of the partial dependence function, and for classification tasks where the learner can output class probabilities it is the Jacobian. Note that if the learner produces discontinuous partial dependence (e.g., piecewise constant functions such as decision trees, ensembles of decision trees, etc.), the derivative will be 0 (where the function is not changing) or trending towards positive or negative infinity (at the discontinuities, where the derivative is undefined). Plotting the partial dependence function of such learners may give the impression that the function is not discontinuous because the prediction grid is not composed of all discontinuity points in the predictor space. This results in a line interpolating between the grid points, which makes the function appear piecewise linear (where the derivative would be defined except at the boundaries of each piece).

The partial derivative can be informative regarding the additivity of the learned function in certain features. If $\hat{f}$ is an additive function in a feature $X_s$, then its partial derivative will not depend on any other features ($X_c$) that may have been used by the learner. Variation in the estimated partial derivative indicates that there is a region of interaction between $X_s$ and $X_c$ in $\hat{f}$. Similarly, instead of using the mean to estimate the expected value of the function at different values of $X_s$, computing the variance can highlight regions of interaction between $X_s$ and $X_c$.

See Goldstein, Kapelner, Bleich, and Pitkin (2014) for more details and their package ICEbox for the original implementation. The algorithm works for any supervised learner with classification, regression, and survival tasks.

Generating partial dependences

Our implementation, following mlr's visualization pattern, consists of the above-mentioned function generatePartialDependenceData, as well as two visualization functions, plotPartialDependence and plotPartialDependenceGGVIS. The former generates the input (objects of class PartialDependenceData) for the latter two.

The first step executed by generatePartialDependenceData is to generate a feature grid for every element of the character vector features passed. The data are given by the input argument, which can be a Task or a data.frame. The feature grid can be generated in several ways. A uniformly spaced grid of length gridsize (default 10) from the empirical minimum to the empirical maximum is created by default, but arguments fmin and fmax may be used to override the empirical default (the lengths of fmin and fmax must match the length of features). Alternatively the feature data can be resampled, either by using a bootstrap or by subsampling.

lrn.classif = makeLearner("classif.ksvm", predict.type = "prob")
fit.classif = train(lrn.classif, iris.task)
pd = generatePartialDependenceData(fit.classif, iris.task, "Petal.Width")
pd
#> PartialDependenceData
#> Task: iris-example
#> Features: Petal.Width
#> Target: setosa, versicolor, virginica
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: FALSE
#>     Class Probability Petal.Width
#> 1: setosa   0.4983925   0.1000000
#> 2: setosa   0.4441165   0.3666667
#> 3: setosa   0.3808075   0.6333333
#> 4: setosa   0.3250243   0.9000000
#> 5: setosa   0.2589014   1.1666667
#> 6: setosa   0.1870692   1.4333333
#> ... (#rows: 30, #cols: 3)
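
The grid arguments described earlier can be adjusted in the same call; for example, a finer grid for the same feature could be requested as follows (a sketch; output omitted):

pd.fine = generatePartialDependenceData(fit.classif, iris.task, "Petal.Width", gridsize = 20L)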

As noted above, $X_s$ does not have to be unidimensional. If it is not, the interaction flag must be set to TRUE. Then the individual feature grids are combined using the Cartesian product, and the estimator above is applied, producing the partial dependence for every combination of unique feature values. If the interaction flag is FALSE (the default), then $X_s$ is assumed to be unidimensional, and partial dependences are generated for each feature separately. The resulting output when interaction = FALSE has a column for each feature, and NA where the feature was not used.
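
When interaction = TRUE, the combined grid is conceptually what base R's expand.grid builds from the per-feature grids; a minimal sketch (grid endpoints taken from the iris data, gridsize 10):

grid.2d = expand.grid(
  Petal.Width = seq(0.1, 2.5, length.out = 10),
  Petal.Length = seq(1, 6.9, length.out = 10)
)
nrow(grid.2d)  # 100 feature combinations; predictions are made for each row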

pd.lst = generatePartialDependenceData(fit.classif, iris.task, c("Petal.Width", "Petal.Length"), FALSE)
head(pd.lst$data)
#>     Class Probability Petal.Width Petal.Length
#> 1: setosa   0.4983925   0.1000000           NA
#> 2: setosa   0.4441165   0.3666667           NA
#> 3: setosa   0.3808075   0.6333333           NA
#> 4: setosa   0.3250243   0.9000000           NA
#> 5: setosa   0.2589014   1.1666667           NA
#> 6: setosa   0.1870692   1.4333333           NA

tail(pd.lst$data)
#>        Class Probability Petal.Width Petal.Length
#> 1: virginica   0.2006336          NA     3.622222
#> 2: virginica   0.3114545          NA     4.277778
#> 3: virginica   0.4404613          NA     4.933333
#> 4: virginica   0.6005358          NA     5.588889
#> 5: virginica   0.7099841          NA     6.244444
#> 6: virginica   0.7242584          NA     6.900000
pd.int = generatePartialDependenceData(fit.classif, iris.task, c("Petal.Width", "Petal.Length"), TRUE)
pd.int
#> PartialDependenceData
#> Task: iris-example
#> Features: Petal.Width, Petal.Length
#> Target: setosa, versicolor, virginica
#> Derivative: FALSE
#> Interaction: TRUE
#> Individual: FALSE
#>     Class Probability Petal.Width Petal.Length
#> 1: setosa   0.6885025         0.1     1.000000
#> 2: setosa   0.6818751         0.1     1.655556
#> 3: setosa   0.6395601         0.1     2.311111
#> 4: setosa   0.5564031         0.1     2.966667
#> 5: setosa   0.4418615         0.1     3.622222
#> 6: setosa   0.3389385         0.1     4.277778
#> ... (#rows: 300, #cols: 4)

At each step in the estimation of $\hat{f}_{X_s}$, a set of predictions of length $N$ is generated. By default the mean prediction is used. For classification where predict.type = "prob" this entails the mean class probabilities. However, other summaries of the predictions may be used. For regression and survival tasks the function used here must return either one number or three, and, if the latter, the numbers must be sorted lowest to highest. For classification tasks the function must return a number for each level of the target feature.

As noted, the fun argument can be a function which returns three numbers (sorted low to high) for a regression task. This allows further exploration of relative feature importance. If a feature is relatively important, the bounds are necessarily tighter because the feature accounts for more of the variance of the predictions, i.e., it is "used" more by the learner. More directly, setting fun = var identifies regions of interaction between $X_s$ and $X_c$ (see the example call after the output below).

lrn.regr = makeLearner("regr.ksvm")
fit.regr = train(lrn.regr, bh.task)
pd.regr = generatePartialDependenceData(fit.regr, bh.task, "lstat", fun = median)
pd.regr
#> PartialDependenceData
#> Task: BostonHousing-example
#> Features: lstat
#> Target: medv
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: FALSE
#>        medv     lstat
#> 1: 24.83809  1.730000
#> 2: 23.73178  5.756667
#> 3: 22.34749  9.783333
#> 4: 20.70444 13.810000
#> 5: 19.60629 17.836667
#> 6: 19.06079 21.863333
#> ... (#rows: 10, #cols: 2)
pd.ci = generatePartialDependenceData(fit.regr, bh.task, "lstat",
  fun = function(x) quantile(x, c(.25, .5, .75)))
pd.ci
#> PartialDependenceData
#> Task: BostonHousing-example
#> Features: lstat
#> Target: medv
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: FALSE
#>        medv Function     lstat
#> 1: 21.39800 medv.25%  1.730000
#> 2: 20.83795 medv.25%  5.756667
#> 3: 19.95342 medv.25%  9.783333
#> 4: 18.71333 medv.25% 13.810000
#> 5: 16.52396 medv.25% 17.836667
#> 6: 15.00419 medv.25% 21.863333
#> ... (#rows: 30, #cols: 3)
pd.classif = generatePartialDependenceData(fit.classif, iris.task, "Petal.Length", fun = median)
pd.classif
#> PartialDependenceData
#> Task: iris-example
#> Features: Petal.Length
#> Target: setosa, versicolor, virginica
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: FALSE
#>     Class Probability Petal.Length
#> 1: setosa  0.31008788     1.000000
#> 2: setosa  0.24271454     1.655556
#> 3: setosa  0.17126036     2.311111
#> 4: setosa  0.09380787     2.966667
#> 5: setosa  0.04579912     3.622222
#> 6: setosa  0.02455344     4.277778
#> ... (#rows: 30, #cols: 3)
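
Returning to the fun = var suggestion above, interaction between lstat and the remaining features could be probed with a call like the following (a sketch; output omitted):

pd.var = generatePartialDependenceData(fit.regr, bh.task, "lstat", fun = var)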

In addition to bounds based on a summary of the distribution of the conditional expectation of each observation, learners which can estimate the variance of their predictions can also be used to produce bounds. The argument bounds is a numeric vector of length two which is added to the point prediction to produce a confidence interval for the partial dependence (so the first number should be negative). The default is the 0.025 and 0.975 quantiles of the Gaussian distribution.

fit.se = train(makeLearner("regr.randomForest", predict.type = "se"), bh.task)
pd.se = generatePartialDependenceData(fit.se, bh.task, c("lstat", "crim"))
head(pd.se$data)
#>       lower     medv    upper     lstat crim
#> 1: 12.44298 31.31098 50.17897  1.730000   NA
#> 2: 14.20191 26.24690 38.29188  5.756667   NA
#> 3: 13.25486 23.56126 33.86766  9.783333   NA
#> 4: 14.21288 22.07362 29.93437 13.810000   NA
#> 5: 12.82489 20.39223 27.95958 17.836667   NA
#> 6: 11.58655 19.68915 27.79176 21.863333   NA

tail(pd.se$data)
#>       lower     medv    upper lstat     crim
#> 1: 10.68572 22.00346 33.32119    NA 39.54849
#> 2: 10.67112 21.99526 33.31940    NA 49.43403
#> 3: 10.60843 21.97053 33.33264    NA 59.31957
#> 4: 10.60916 21.97083 33.33249    NA 69.20512
#> 5: 10.60922 21.97102 33.33281    NA 79.09066
#> 6: 10.60922 21.97102 33.33281    NA 88.97620
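
The bounds argument can also be set explicitly; for example, a 90% interval based on Gaussian quantiles might be requested as follows (a sketch; output omitted):

pd.se90 = generatePartialDependenceData(fit.se, bh.task, "lstat", bounds = qnorm(c(0.05, 0.95)))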

As previously mentioned, if the aggregation function is not used, i.e., it is the identity, then the conditional expectation of $\hat{f}^{(i)}_{X_s}$ is estimated. If individual = TRUE then generatePartialDependenceData returns $n$ partial dependence estimates, one made for each observation at each point in the prediction grid constructed from the features.

pd.ind.regr = generatePartialDependenceData(fit.regr, bh.task, "lstat", individual = TRUE)
pd.ind.regr
#> PartialDependenceData
#> Task: BostonHousing-example
#> Features: lstat
#> Target: medv
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: TRUE
#>        medv n     lstat
#> 1: 25.82788 1  1.730000
#> 2: 25.24664 1  5.756667
#> 3: 24.49344 1  9.783333
#> 4: 23.64500 1 13.810000
#> 5: 22.74246 1 17.836667
#> 6: 21.82257 1 21.863333
#> ... (#rows: 5060, #cols: 3)

The resulting output, particularly the element data in the returned object, has an additional column, n, which gives the index of the observation to which the row pertains.

For classification tasks this index references both the class and the observation index.

pd.ind.classif = generatePartialDependenceData(fit.classif, iris.task, "Petal.Length", individual = TRUE)
pd.ind.classif
#> PartialDependenceData
#> Task: iris-example
#> Features: Petal.Length
#> Target: setosa, versicolor, virginica
#> Derivative: FALSE
#> Interaction: FALSE
#> Individual: TRUE
#>     Class Probability n Petal.Length
#> 1: setosa  0.27030106 1     1.000000
#> 2: setosa  0.22405269 1     1.655556
#> 3: setosa  0.14030171 1     2.311111
#> 4: setosa  0.06341523 1     2.966667
#> 5: setosa  0.02663704 1     3.622222
#> 6: setosa  0.01513396 1     4.277778
#> ... (#rows: 4500, #cols: 4)

Partial derivatives can also be computed for individual partial dependence estimates and aggregate partial dependence. This is restricted to a single feature at a time. The derivatives of individual partial dependence estimates can be useful in finding regions of interaction between the feature for which the derivative is estimated and the excluded features.

pd.regr.der = generatePartialDependenceData(fit.regr, bh.task, "lstat", derivative = TRUE)
head(pd.regr.der$data)
#>          medv     lstat
#> 1: -0.2254826  1.730000
#> 2: -0.3552918  5.756667
#> 3: -0.4286202  9.783333
#> 4: -0.4349631 13.810000
#> 5: -0.3810348 17.836667
#> 6: -0.2837255 21.863333
pd.regr.der.ind = generatePartialDependenceData(fit.regr, bh.task, "lstat", derivative = TRUE,
  individual = TRUE)
head(pd.regr.der.ind$data)
#>          medv   n     lstat
#> 1: -0.1942015 250  1.730000
#> 2: -0.3268480 250  5.756667
#> 3: -0.3886969 250  9.783333
#> 4: -0.3745454 250 13.810000
#> 5: -0.3036274 250 17.836667
#> 6: -0.2067932 250 21.863333
pd.classif.der = generatePartialDependenceData(fit.classif, iris.task, "Petal.Width", derivative = TRUE)
head(pd.classif.der$data)
#>     Class Probability Petal.Width
#> 1: setosa  -0.1479385   0.1000000
#> 2: setosa  -0.2422728   0.3666667
#> 3: setosa  -0.2189893   0.6333333
#> 4: setosa  -0.2162803   0.9000000
#> 5: setosa  -0.2768042   1.1666667
#> 6: setosa  -0.2394176   1.4333333
pd.classif.der.ind = generatePartialDependenceData(fit.classif, iris.task, "Petal.Width", derivative = TRUE, individual = TRUE)
head(pd.classif.der.ind$data)
#>     Class Probability   n Petal.Width
#> 1: setosa -0.15676127 125   0.1000000
#> 2: setosa -0.25561911 125   0.3666667
#> 3: setosa -0.26219088 125   0.6333333
#> 4: setosa -0.15195259 125   0.9000000
#> 5: setosa -0.05714870 125   1.1666667
#> 6: setosa -0.02776717 125   1.4333333

Plotting partial dependences

Results from generatePartialDependenceData and generateFunctionalANOVAData can be visualized with plotPartialDependence and plotPartialDependenceGGVIS.

With one feature and a regression task the output is a line plot, with a point for each point in the corresponding feature's grid.

plotPartialDependence(pd.regr)

With a classification task, a line is drawn for each class, which gives the estimated partial probability of that class for a particular point in the feature grid.

plotPartialDependence(pd.classif)

For regression tasks, when the fun argument of generatePartialDependenceData is used, the bounds will automatically be displayed using a gray ribbon.

plotPartialDependence(pd.ci)

The same goes for plots of partial dependences where the learner has predict.type = "se".

plotPartialDependence(pd.se)

When multiple features are passed to generatePartialDependenceData but interaction = FALSE, facetting is used to display each estimated bivariate relationship.

plotPartialDependence(pd.lst)

When interaction = TRUE in the call to generatePartialDependenceData, one feature must be chosen for facetting; a subplot is created for each value in that feature's grid, showing the partial dependence of the other feature at that value. Note that this type of plot is limited to two features.

plotPartialDependence(pd.int, facet = "Petal.Length")

plotPartialDependenceGGVIS can be used similarly; however, since ggvis currently lacks subplotting/facetting capabilities, the interact argument maps one feature to an interactive sidebar where the user can select its value.

plotPartialDependenceGGVIS(pd.int, interact = "Petal.Length")

When individual = TRUE each individual conditional expectation curve is plotted.

plotPartialDependence(pd.ind.regr)