Details

Custom nets can be passed to this learner either by building them with the survivalmodels::build_pytorch_net utility function or by constructing a torch module directly via reticulate. The number of output channels must equal the number of discretised time points, i.e. it is determined by the parameters cuts or cutpoints.
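A minimal sketch of the first route, assuming survivalmodels is installed and that build_pytorch_net accepts input/output dimensions and hidden-layer sizes as shown (the feature count of 3 is purely illustrative):

```r
library(mlr3)
library(mlr3extralearners)

# n_in must match the number of (numeric) features of the task,
# n_out the number of discretised time points (here the default cuts = 10).
net = survivalmodels::build_pytorch_net(
  n_in = 3,
  n_out = 10,
  nodes = c(16, 16),     # two hidden layers of 16 units each
  activation = "relu"
)

# Pass the net via the custom_net parameter; cuts must agree with n_out.
learner = lrn("surv.deephit", custom_net = net, cuts = 10)
```

Note that building the net requires a working Python backend (reticulate plus pycox), so this sketch is not executable in a pure R session.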

Dictionary

This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn():

mlr_learners$get("surv.deephit")
lrn("surv.deephit")

Meta Information

Parameters

| Id | Type | Default | Levels | Range |
|---|---|---|---|---|
| frac | numeric | 0 | | \([0, 1]\) |
| cuts | integer | 10 | | \([1, \infty)\) |
| cutpoints | list | - | | \((-\infty, \infty)\) |
| scheme | character | equidistant | equidistant, quantiles | \((-\infty, \infty)\) |
| cut_min | numeric | 0 | | \([0, \infty)\) |
| num_nodes | list | 32, 32 | | \((-\infty, \infty)\) |
| batch_norm | logical | TRUE | TRUE, FALSE | \((-\infty, \infty)\) |
| dropout | numeric | - | | \([0, 1]\) |
| activation | character | relu | celu, elu, gelu, glu, hardshrink, hardsigmoid, hardswish, hardtanh, relu6, leakyrelu, ... | \((-\infty, \infty)\) |
| custom_net | list | - | | \((-\infty, \infty)\) |
| device | list | - | | \((-\infty, \infty)\) |
| mod_alpha | numeric | 0.2 | | \([0, 1]\) |
| sigma | numeric | 0.1 | | \([0, \infty)\) |
| shrink | numeric | 0 | | \([0, \infty)\) |
| optimizer | character | adam | adadelta, adagrad, adam, adamax, adamw, asgd, rmsprop, rprop, sgd, sparse_adam | \((-\infty, \infty)\) |
| rho | numeric | 0.9 | | \((-\infty, \infty)\) |
| eps | numeric | 1e-08 | | \((-\infty, \infty)\) |
| lr | numeric | 1 | | \((-\infty, \infty)\) |
| weight_decay | numeric | 0 | | \((-\infty, \infty)\) |
| learning_rate | numeric | 0.01 | | \((-\infty, \infty)\) |
| lr_decay | numeric | 0 | | \((-\infty, \infty)\) |
| betas | list | 0.900, 0.999 | | \((-\infty, \infty)\) |
| amsgrad | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| lambd | numeric | 1e-04 | | \([0, \infty)\) |
| alpha | numeric | 0.75 | | \([0, \infty)\) |
| t0 | numeric | 1e+06 | | \((-\infty, \infty)\) |
| momentum | numeric | 0 | | \((-\infty, \infty)\) |
| centered | logical | TRUE | TRUE, FALSE | \((-\infty, \infty)\) |
| etas | list | 0.5, 1.2 | | \((-\infty, \infty)\) |
| step_sizes | list | 1e-06, 5e+01 | | \((-\infty, \infty)\) |
| dampening | numeric | 0 | | \((-\infty, \infty)\) |
| nesterov | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| batch_size | integer | 256 | | \((-\infty, \infty)\) |
| epochs | integer | 1 | | \([1, \infty)\) |
| verbose | logical | TRUE | TRUE, FALSE | \((-\infty, \infty)\) |
| num_workers | integer | 0 | | \((-\infty, \infty)\) |
| shuffle | logical | TRUE | TRUE, FALSE | \((-\infty, \infty)\) |
| best_weights | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| early_stopping | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| min_delta | numeric | 0 | | \((-\infty, \infty)\) |
| patience | integer | 10 | | \((-\infty, \infty)\) |
| interpolate | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| inter_scheme | character | const_hazard | const_hazard, const_pdf | \((-\infty, \infty)\) |
| sub | integer | 10 | | \([1, \infty)\) |
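As a hedged illustration of how these parameters are set, they can be passed directly at construction time via lrn(); the specific values below are arbitrary examples, not recommendations:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("surv.deephit",
  cuts          = 20,         # number of discretised time points
  num_nodes     = c(64, 64),  # hidden layer sizes
  dropout       = 0.1,
  optimizer     = "adam",
  learning_rate = 0.001,
  epochs        = 50,
  early_stopping = TRUE,
  frac          = 0.3,        # fraction of data held out for validation
  patience      = 5           # epochs without improvement before stopping
)
```

Parameters can also be changed after construction through `learner$param_set$values`.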

References

Changhee Lee, William R Zame, Jinsung Yoon, and Mihaela van der Schaar. Deephit: A deep learning approach to survival analysis with competing risks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. http://medianetlab.ee.ucla.edu/papers/AAAI_2018_DeepHit

See also

Author

RaphaelS1

Super classes

mlr3::Learner -> mlr3proba::LearnerSurv -> LearnerSurvDeephit

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerSurvDeephit$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerSurvDeephit$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (requireNamespace("survivalmodels", quietly = TRUE) && requireNamespace("distr6", quietly = TRUE) && requireNamespace("reticulate", quietly = TRUE)) {
  learner = mlr3::lrn("surv.deephit")
  print(learner)

  # available parameters:
  learner$param_set$ids()
}
#> <LearnerSurvDeephit:surv.deephit>
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3extralearners, survivalmodels, distr6, reticulate
#> * Predict Type: crank
#> * Feature types: integer, numeric
#> * Properties: -
#>  [1] "frac"           "cuts"           "cutpoints"      "scheme"        
#>  [5] "cut_min"        "num_nodes"      "batch_norm"     "dropout"       
#>  [9] "activation"     "custom_net"     "device"         "mod_alpha"     
#> [13] "sigma"          "shrink"         "optimizer"      "rho"           
#> [17] "eps"            "lr"             "weight_decay"   "learning_rate" 
#> [21] "lr_decay"       "betas"          "amsgrad"        "lambd"         
#> [25] "alpha"          "t0"             "momentum"       "centered"      
#> [29] "etas"           "step_sizes"     "dampening"      "nesterov"      
#> [33] "batch_size"     "epochs"         "verbose"        "num_workers"   
#> [37] "shuffle"        "best_weights"   "early_stopping" "min_delta"     
#> [41] "patience"       "interpolate"    "inter_scheme"   "sub"
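A sketch of a full train/predict cycle, assuming a configured Python backend (reticulate plus pycox) so it is not run here; the feature selection on the "rats" task is an assumption to keep only the integer/numeric features this learner supports:

```r
library(mlr3)
library(mlr3proba)
library(mlr3extralearners)

task = tsk("rats")
task$select(c("litter", "rx"))  # drop the factor feature "sex"

learner = lrn("surv.deephit", epochs = 5, verbose = FALSE)

split = partition(task)
learner$train(task, row_ids = split$train)
prediction = learner$predict(task, row_ids = split$test)

# Evaluate with Harrell's concordance index.
prediction$score(msr("surv.cindex"))
```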