Active Learning Optimizer Factory
Source: R/optimizer_active_learning.R
optimizer_active_learning.Rd

Convenience constructor that wires together an OptimizerAL for uncertainty-based active learning with optional multipoint proposal heuristics.
Arguments
- learner
  (mlr3::LearnerRegr)
  Base regression learner used as the surrogate.

- se_method
  (character(1))
  How to obtain standard errors:
  - "auto": use native "se" if supported by learner, otherwise "bootstrap".
  - "bootstrap": wrap via LearnerRegrBootstrapSE.
  - "quantile": wrap via LearnerRegrQuantileSE (requires "quantiles" support).
- n_bootstrap
  (integer(1))
  Number of bootstrap replicates for "bootstrap". Ignored otherwise.

- batch_size
  (integer(1))
  Number of points proposed per active-learning iteration.

- multipoint_method
  (character(1))
  Batch selection strategy:
  - "greedy": top-k by acquisition score.
  - "local_penalization": sequential local-penalization heuristic.
  - "diversity": sequential score/diversity trade-off.
  - "constant_liar": sequential pseudo-label batching.
- acq_optimizer
  (bbotk::Optimizer | mlr3mbo::AcqOptimizer)
  Optimizer used to choose the candidate-generation strategy for acquisition scoring. The current implementation translates common optimizers to a SpaceSampler and ignores optimizer-specific search logic.

- acq_evals
  (integer(1))
  Number of candidate points scored per proposal round.
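The arguments above can be combined as in the following sketch. This is illustrative only: it assumes the package exporting optimizer_active_learning() is attached, and lrn("regr.rpart") assumes the mlr3 ecosystem. Since regr.rpart has no native "se" support, se_method = "bootstrap" (or "auto") wraps it in LearnerRegrBootstrapSE.

```r
library(mlr3)

# Hypothetical usage sketch; argument names follow this page.
opt = optimizer_active_learning(
  learner = lrn("regr.rpart"),
  se_method = "bootstrap",         # "auto" would also fall back to bootstrap here
  n_bootstrap = 30,                # replicates used to estimate standard errors
  batch_size = 4,                  # points proposed per iteration
  multipoint_method = "greedy"     # take the top-4 by acquisition score
)
```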
Value
A configured OptimizerAL.
Details
This helper builds an active-learning optimizer around:

- an uncertainty acquisition function ("sd")
- a surrogate that can provide standard errors (either native "se", LearnerRegrBootstrapSE, or LearnerRegrQuantileSE)
- proposer-based batch construction via ALProposerScore, ALProposerSequentialScore, or ALProposerPseudoLabel
acq_evals controls the size of the candidate pool scored in each proposal round. For continuous search spaces, candidates are drawn via a coarse translation of acq_optimizer to a SpaceSampler; for finite pools, the same sampler is applied to the remaining pool.
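The interaction of acq_evals with batch construction can be sketched as follows. Again a hypothetical example: lrn("regr.km") assumes mlr3learners with DiceKriging installed (a kriging learner with native "se" support), so se_method = "auto" resolves to the learner's own standard errors.

```r
library(mlr3)

# Hypothetical usage sketch: each proposal round samples acq_evals
# candidates from the search space (via the SpaceSampler translation of
# acq_optimizer), scores them with the "sd" acquisition, and lets the
# constant-liar proposer pick a batch of batch_size points sequentially.
opt = optimizer_active_learning(
  learner = lrn("regr.km"),             # supports native "se"
  se_method = "auto",                   # resolves to the learner's own "se"
  batch_size = 5,
  multipoint_method = "constant_liar",  # sequential pseudo-label batching
  acq_evals = 500                       # candidate pool size per round
)
```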