Details

Custom nets can be used in this learner, either built with the survivalmodels::build_pytorch_net utility function or defined directly with torch via reticulate. The number of output channels must equal the number of discretised time points, which is controlled by the cuts or cutpoints parameters.
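As a rough illustration, a custom net could be built and passed in as follows. This is a hedged sketch, not a tested recipe: it assumes survivalmodels is installed with a working Python/torch backend, and the argument names (n_in, n_out, nodes, activation) follow the survivalmodels documentation for build_pytorch_net.

```r
# Minimal sketch: build an MLP with survivalmodels and pass it as custom_net.
# Assumes a configured reticulate/Python environment with torch available.
library(survivalmodels)

net = build_pytorch_net(
  n_in = 5L,           # number of features in the task
  n_out = 10L,         # must match the number of discretised time points (cuts)
  nodes = c(32L, 32L), # hidden layer widths
  activation = "relu"
)

learner = mlr3::lrn("surv.loghaz", custom_net = net, cuts = 10L)
```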

Dictionary

This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn():

mlr_learners$get("surv.loghaz")
lrn("surv.loghaz")

Meta Information

- Task type: "surv"
- Predict Types: "crank", "distr"
- Feature Types: "integer", "numeric"
- Required Packages: mlr3, mlr3extralearners, survivalmodels, distr6, reticulate

Parameters

| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| frac | numeric | 0 | - | \([0, 1]\) |
| cuts | integer | 10 | - | \([1, \infty)\) |
| cutpoints | list | - | - | \((-\infty, \infty)\) |
| scheme | character | equidistant | equidistant, quantiles | \((-\infty, \infty)\) |
| cut_min | numeric | 0 | - | \([0, \infty)\) |
| num_nodes | list | 32, 32 | - | \((-\infty, \infty)\) |
| batch_norm | logical | TRUE | TRUE, FALSE | \((-\infty, \infty)\) |
| dropout | numeric | - | - | \([0, 1]\) |
| activation | character | relu | celu, elu, gelu, glu, hardshrink, hardsigmoid, hardswish, hardtanh, relu6, leakyrelu, ... | \((-\infty, \infty)\) |
| custom_net | list | - | - | \((-\infty, \infty)\) |
| device | list | - | - | \((-\infty, \infty)\) |
| shrink | numeric | 0 | - | \([0, \infty)\) |
| optimizer | character | adam | adadelta, adagrad, adam, adamax, adamw, asgd, rmsprop, rprop, sgd, sparse_adam | \((-\infty, \infty)\) |
| rho | numeric | 0.9 | - | \((-\infty, \infty)\) |
| eps | numeric | 1e-08 | - | \((-\infty, \infty)\) |
| lr | numeric | 1 | - | \((-\infty, \infty)\) |
| weight_decay | numeric | 0 | - | \((-\infty, \infty)\) |
| learning_rate | numeric | 0.01 | - | \((-\infty, \infty)\) |
| lr_decay | numeric | 0 | - | \((-\infty, \infty)\) |
| betas | list | 0.900, 0.999 | - | \((-\infty, \infty)\) |
| amsgrad | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| lambd | numeric | 1e-04 | - | \([0, \infty)\) |
| alpha | numeric | 0.75 | - | \([0, \infty)\) |
| t0 | numeric | 1e+06 | - | \((-\infty, \infty)\) |
| momentum | numeric | 0 | - | \((-\infty, \infty)\) |
| centered | logical | TRUE | TRUE, FALSE | \((-\infty, \infty)\) |
| etas | list | 0.5, 1.2 | - | \((-\infty, \infty)\) |
| step_sizes | list | 1e-06, 5e+01 | - | \((-\infty, \infty)\) |
| dampening | numeric | 0 | - | \((-\infty, \infty)\) |
| nesterov | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| batch_size | integer | 256 | - | \((-\infty, \infty)\) |
| epochs | integer | 1 | - | \([1, \infty)\) |
| verbose | logical | TRUE | TRUE, FALSE | \((-\infty, \infty)\) |
| num_workers | integer | 0 | - | \((-\infty, \infty)\) |
| shuffle | logical | TRUE | TRUE, FALSE | \((-\infty, \infty)\) |
| best_weights | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| early_stopping | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| min_delta | numeric | 0 | - | \((-\infty, \infty)\) |
| patience | integer | 10 | - | \((-\infty, \infty)\) |
| interpolate | logical | FALSE | TRUE, FALSE | \((-\infty, \infty)\) |
| inter_scheme | character | const_hazard | const_hazard, const_pdf | \((-\infty, \infty)\) |
| sub | integer | 10 | - | \([1, \infty)\) |
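Parameters can be set at construction time through lrn(). The following is an illustrative sketch only; the values chosen are not recommendations, and running it requires mlr3extralearners plus a Python backend for survivalmodels.

```r
# Sketch: override a few of the parameters listed above at construction time.
learner = mlr3::lrn(
  "surv.loghaz",
  cuts = 20L,               # number of discretised time points
  num_nodes = c(64L, 64L),  # hidden layer widths
  dropout = 0.1,
  optimizer = "adam",
  learning_rate = 0.001,
  epochs = 50L,
  early_stopping = TRUE,
  patience = 5L
)
```

Any parameter can also be changed after construction via learner$param_set$values.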

References

Gensheimer, M. F., & Narasimhan, B. (2018). A simple discrete-time survival model for neural networks. arXiv:1805.00917. https://arxiv.org/abs/1805.00917

Kvamme, H., & Borgan, Ø. (2019). Continuous and discrete-time survival prediction with neural networks. arXiv:1910.06724. https://arxiv.org/abs/1910.06724

Author

RaphaelS1

Super classes

mlr3::Learner -> mlr3proba::LearnerSurv -> LearnerSurvLogisticHazard

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerSurvLogisticHazard$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerSurvLogisticHazard$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (requireNamespace("survivalmodels", quietly = TRUE) && requireNamespace("distr6", quietly = TRUE) && requireNamespace("reticulate", quietly = TRUE)) {
  learner = mlr3::lrn("surv.loghaz")
  print(learner)

  # available parameters:
  learner$param_set$ids()
}
#> <LearnerSurvLogisticHazard:surv.loghaz>
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3extralearners, survivalmodels, distr6, reticulate
#> * Predict Type: crank
#> * Feature types: integer, numeric
#> * Properties: -
#>  [1] "frac"           "cuts"           "cutpoints"      "scheme"        
#>  [5] "cut_min"        "num_nodes"      "batch_norm"     "dropout"       
#>  [9] "activation"     "custom_net"     "device"         "shrink"        
#> [13] "optimizer"      "rho"            "eps"            "lr"            
#> [17] "weight_decay"   "learning_rate"  "lr_decay"       "betas"         
#> [21] "amsgrad"        "lambd"          "alpha"          "t0"            
#> [25] "momentum"       "centered"       "etas"           "step_sizes"    
#> [29] "dampening"      "nesterov"       "batch_size"     "epochs"        
#> [33] "verbose"        "num_workers"    "shuffle"        "best_weights"  
#> [37] "early_stopping" "min_delta"      "patience"       "interpolate"   
#> [41] "inter_scheme"   "sub"
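Beyond inspecting the learner, a typical train/predict workflow looks roughly like the following. This is a hedged, not-run sketch: it assumes mlr3proba provides the "rats" task and the "surv.cindex" measure, and it requires a configured Python environment with torch/pycox for survivalmodels.

```r
# Sketch: train the logistic hazard learner on a built-in survival task
# and score the predictions. Not run without a Python backend.
library(mlr3)
library(mlr3proba)

task = tsk("rats")
learner = lrn("surv.loghaz", epochs = 10L, verbose = FALSE)

split = partition(task, ratio = 0.8)        # train/test row ids
learner$train(task, row_ids = split$train)
prediction = learner$predict(task, row_ids = split$test)
prediction$score(msr("surv.cindex"))        # concordance index
```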