We are happy to announce that mlr3spatial has been released on CRAN in November 2021. mlr3spatial simplifies the handling of spatial objects in the mlr3 ecosystem. Before mlr3spatial, users had to manually extract tabular data from spatial objects to train a model or to predict on spatial data.

Now, mlr3 tasks can directly read from spatial objects via specialized data backends. Such tasks can be used to train a model or to perform resampling just like any other mlr3 task. Spatial raster objects created by the terra, raster, and stars packages are supported via DataBackendRaster. Additionally, vector data created with the sf package is handled by DataBackendVector.
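As a minimal sketch of how this looks in code (assuming the as_data_backend() method that mlr3spatial registers for terra's SpatRaster class, and using terra's bundled example raster), a multi-layer raster becomes an ordinary mlr3 task:

```r
library("mlr3")
library("mlr3spatial")
library("terra")

# Multi-layer example raster shipped with terra (layers "red", "green", "blue").
stack = rast(system.file("ex/logo.tif", package = "terra"))

# Wrap the raster in a DataBackendRaster and build a regular regression task from it.
backend = as_data_backend(stack)
task = as_task_regr(backend, target = "red")
```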
The predict_spatial() function creates spatial rasters and vector features with predictions from mlr3 learners. We only have to pass a task with a spatial data backend, which provides both the data and the spatial reference. To avoid memory issues with large raster files, prediction is done in chunks: the raster map is divided into multiple horizontal strips. The vertical extent of these strips is controlled by the chunksize parameter. The actual memory usage per core is a multiple of the specified chunk size. The default chunk size of 200 megabytes should work on most consumer-grade machines. If more memory is available, a larger chunk size accelerates the prediction process.
One after the other, the raster chunks are loaded into memory and the prediction is written to disk. Finally, the complete raster is available on disk. The learner can also make use of future-based parallelization to accelerate the predictions. The vignette on “Benchmarking parallel predictions” showcases the parallelization capabilities of mlr3spatial.
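As a hedged sketch (task and learner are placeholders defined further below, and we assume predict_spatial() takes the chunk size in megabytes via its chunksize argument), chunked and parallel prediction could be combined like this:

```r
library("future")

# Use all local cores for the prediction step.
plan("multisession")
learner$parallel_predict = TRUE

# Predict in 100 MB chunks instead of the 200 MB default.
ras = predict_spatial(task, learner, chunksize = 100L)
```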
First, the TIFF file is read via stars::read_stars() and wrapped in a DataBackendRaster. The data backend is then used to create a regression task with band1 as the response.
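The corresponding code is roughly the following sketch; it uses the L7_ETMs.tif example file shipped with stars and assumes the backend exposes the bands as columns band1 to band6, matching the task printed below:

```r
library("mlr3")
library("mlr3spatial")

# Read the example multi-band TIFF as a stars object.
tif = system.file("tif/L7_ETMs.tif", package = "stars")
stack = stars::read_stars(tif)

# Wrap it in a DataBackendRaster and create the regression task.
backend = as_data_backend(stack)
task = as_task_regr(backend, target = "band1")
print(task)
```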
<TaskRegr:backend> (122848 x 6)
* Target: band1
* Properties: -
* Features (5):
- dbl (5): band2, band3, band4, band5, band6
For large raster files with millions of values, it helps to predict in parallel. To enable this, set learner$parallel_predict = TRUE and initiate a parallel plan via {future}, e.g. plan("multisession"). Since this is only an example, parallelization is not enabled here. We use a simple regression tree as the example learner; in practice you might want to use a different learner. You can find an overview of available learners here.
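A sketch of the training step (sampling 500 cells is an illustrative choice to keep the example fast):

```r
learner = lrn("regr.rpart")

# Train on a random subset of raster cells.
set.seed(42)
row_ids = sample(task$nrow, 500)
learner$train(task, row_ids = row_ids)

# Parallel prediction would be enabled like this (not done here):
# future::plan("multisession")
# learner$parallel_predict = TRUE

print(learner)
```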
INFO [11:35:06.432] 447
INFO [11:35:06.496] 447
<LearnerRegrRpart:regr.rpart>
* Model: rpart
* Parameters: xval=0
* Packages: mlr3, rpart
* Predict Type: response
* Feature types: logical, integer, numeric, factor, ordered
* Properties: importance, missings, selected_features, weights
For prediction, predict_spatial() is used. It returns a raster containing the predictions. Users can select which R spatial class the returned raster should have. Note that by default mlr3spatial operates with SpatRaster objects from the terra package. If a different output format is requested (e.g. "stars"), coercion happens in the background, which might take some time.
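A sketch of the prediction call, requesting a stars object as output (the format argument is our assumption for how the output class is selected):

```r
# Predict on the full raster; the result is coerced to a stars object.
ras = predict_spatial(task, learner, format = "stars")
print(ras)
```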
INFO [11:35:06.827] Start raster prediction
INFO [11:35:06.831] Prediction is executed with a chunksize of 200, 1 chunk(s) in total, 122848 values per chunk
INFO [11:35:06.898] 414
INFO [11:35:07.113] 400
INFO [11:35:07.119] Chunk 1 of 1 finished
INFO [11:35:07.132] Finished raster prediction in 0 seconds
stars object with 2 dimensions and 1 attribute
attribute(s):
Min. 1st Qu. Median Mean 3rd Qu. Max.
cadmium 62.3629 70.30233 77.01695 79.05135 89.2809 118.1429
dimension(s):
from to offset delta refsys point values x/y
x 1 349 288776 28.5 SIRGAS 2000 / UTM zone 25S FALSE NULL [x]
y 1 352 9120761 -28.5 SIRGAS 2000 / UTM zone 25S FALSE NULL [y]
Finally, we can plot the predictions. The color vector is extracted from the "viridis" color palette via dput(viridis::viridis_pal()(5)) and passed to the plot() call, which dispatches to the S3 plot method of the stars package in this scenario.
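A sketch of the plotting step; instead of pasting the dput() output, the five colors can be drawn from the palette directly:

```r
# Five viridis colors, passed to the stars plot method.
colors = viridis::viridis_pal()(5)
plot(ras, col = colors)
```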