Survival Cox Model with Cross-Validation Likelihood Based Boosting Learner
mlr_learners_surv.cv_coxboost.Rd
Fits a survival Cox model using likelihood-based boosting and internal cross-validation for the
number of steps.
Calls CoxBoost::CoxBoost()
or CoxBoost::cv.CoxBoost()
from package 'CoxBoost'.
Details
Use LearnerSurvCoxboost and LearnerSurvCVCoxboost for Cox boosting without and with internal
cross-validation of boosting step number, respectively. Tuning using the internal optimizer in
LearnerSurvCVCoxboost may be more efficient when tuning stepno
only. However, for tuning
multiple hyperparameters, mlr3tuning and LearnerSurvCoxboost will likely give better
results.
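As a minimal sketch of the internal cross-validation (assuming the CoxBoost package is installed; the rats task from mlr3proba and the parameter values are illustrative choices only):

library(mlr3)
library(mlr3proba)
library(mlr3extralearners)

# internal 5-fold cross-validation over the boosting step number, searching up to 200 steps
learner = lrn("surv.cv_coxboost", K = 5, maxstepno = 200)

task = tsk("rats")
task$select(c("litter", "rx")) # keep only integer features; this learner supports integer/numeric only
learner$train(task)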
If penalty == "optimCoxBoostPenalty", then CoxBoost::optimCoxBoostPenalty() is used to determine
the penalty value passed to CoxBoost::cv.CoxBoost().
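For example (a sketch; it assumes the string "optimCoxBoostPenalty" is accepted as a special value of the otherwise numeric penalty parameter, as described above):

# let CoxBoost::optimCoxBoostPenalty() search for a suitable penalty before the step-number cross-validation
learner_opt = lrn("surv.cv_coxboost", penalty = "optimCoxBoostPenalty")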
Three prediction types are returned for this learner, using the internal predict.CoxBoost() function:
lp: a vector of linear predictors (relative risk scores), one per observation.
crank: same as lp.
distr: a 2d survival matrix, with observations as rows and time points as columns. The internal transformation uses the Breslow estimator to compose the survival distributions from the lp predictions.
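A short sketch of accessing the three prediction types (assuming a learner trained on a survival task, e.g. as sketched above):

p = learner$predict(task)
head(p$lp)    # linear predictors (relative risk scores)
head(p$crank) # identical to lp for this learner
p$distr       # survival distributions composed from lp via the Breslow estimator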
Dictionary
This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn()
:
mlr_learners$get("surv.cv_coxboost")
lrn("surv.cv_coxboost")
Meta Information
Task type: “surv”
Predict Types: “crank”, “distr”, “lp”
Feature Types: “integer”, “numeric”
Required Packages: mlr3, mlr3proba, mlr3extralearners, CoxBoost, pracma
Parameters
Id | Type | Default | Levels | Range |
maxstepno | integer | 100 | - | \([0, \infty)\) |
K | integer | 10 | - | \([2, \infty)\) |
type | character | verweij | verweij, naive | - |
folds | untyped | - | - | - |
minstepno | integer | 50 | - | \([0, \infty)\) |
start.penalty | numeric | - | - | \((-\infty, \infty)\) |
iter.max | integer | 10 | - | \([1, \infty)\) |
upper.margin | numeric | 0.05 | - | \([0, 1]\) |
unpen.index | untyped | - | - | - |
standardize | logical | TRUE | TRUE, FALSE | - |
penalty | numeric | - | - | \((-\infty, \infty)\) |
criterion | character | pscore | pscore, score, hpscore, hscore | - |
stepsize.factor | numeric | 1 | - | \((-\infty, \infty)\) |
sf.scheme | character | sigmoid | sigmoid, linear | - |
pendistmat | untyped | - | - | - |
connected.index | untyped | - | - | - |
x.is.01 | logical | FALSE | TRUE, FALSE | - |
return.score | logical | TRUE | TRUE, FALSE | - |
trace | logical | FALSE | TRUE, FALSE | - |
at.step | untyped | - | - | - |
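Parameters can be set at construction or changed on an existing learner; a brief sketch (the values are purely illustrative):

learner = lrn("surv.cv_coxboost", criterion = "hpscore", trace = TRUE)
# equivalently, modify the parameter set of an existing learner
learner$param_set$values$upper.margin = 0.1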
Installation
The package 'CoxBoost' is not on CRAN and has to be installed from GitHub using:
remotes::install_github("binderh/CoxBoost")
References
Binder, Harald, Allignol, Arthur, Schumacher, Martin, Beyersmann, Jan (2009). “Boosting for high-dimensional time-to-event data with competing risks.” Bioinformatics, 25(7), 890--896.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner
-> mlr3proba::LearnerSurv
-> LearnerSurvCVCoxboost
Methods
Method selected_features()
Returns the set of selected features which have non-zero coefficients.
Calls the internal coef.CoxBoost()
function.
Arguments
at_step (integer(1))
Which boosting step to get the coefficients for. If no step is given (default), the final boosting step is used.
Returns
(character()) vector of feature names.
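A usage sketch (assuming a trained learner; the step number passed to at_step is arbitrary):

learner$selected_features()              # features with non-zero coefficients at the final step
learner$selected_features(at_step = 10)  # features selected after 10 boosting steps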
Examples
learner = mlr3::lrn("surv.cv_coxboost")
print(learner)
#> <LearnerSurvCVCoxboost:surv.cv_coxboost>: Likelihood-based Boosting
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3proba, mlr3extralearners, CoxBoost, pracma
#> * Predict Types: crank, [distr], lp
#> * Feature Types: integer, numeric
#> * Properties: selected_features, weights
# available parameters:
learner$param_set$ids()
#> [1] "maxstepno" "K" "type" "folds"
#> [5] "minstepno" "start.penalty" "iter.max" "upper.margin"
#> [9] "unpen.index" "standardize" "penalty" "criterion"
#> [13] "stepsize.factor" "sf.scheme" "pendistmat" "connected.index"
#> [17] "x.is.01" "return.score" "trace" "at.step"