Title: Models for Survival Analysis
Version: 0.1.191
Description: Implementations of classical and machine learning models for survival analysis, including deep neural networks via 'keras' and 'tensorflow'. Each model includes a separate fit and predict interface with consistent prediction types for predicting risk or survival probabilities. Models are either implemented from 'Python' via 'reticulate' https://CRAN.R-project.org/package=reticulate, from code in GitHub packages, or novel implementations using 'Rcpp' https://CRAN.R-project.org/package=Rcpp. Neural networks are implemented from the 'Python' package 'pycox' https://github.com/havakv/pycox.
License: MIT + file LICENSE
URL: https://github.com/RaphaelS1/survivalmodels/
BugReports: https://github.com/foucher-y/survivalmodels/issues
Imports: Rcpp (≥ 1.0.5)
Suggests: keras (≥ 2.11.0), pseudo, reticulate, survival
LinkingTo: Rcpp
Encoding: UTF-8
NeedsCompilation: yes
Packaged: 2024-03-18 21:10:19 UTC; foucher-y
Author: Raphael Sonabend
Maintainer: Yohann Foucher <yohann.foucher@univ-poitiers.fr>
Repository: CRAN
Date/Publication: 2024-03-19 16:50:03 UTC
survivalmodels: Models for Survival Analysis
Description
survivalmodels implements classical and machine learning models for survival analysis that either do not already exist in R or that provide more efficient implementations than existing ones.
Author(s)
Maintainer: Yohann Foucher <yohann.foucher@univ-poitiers.fr> (ORCID)
Authors: Raphael Sonabend (ORCID)
See Also
Useful links:
- https://github.com/RaphaelS1/survivalmodels/
- Report bugs at https://github.com/foucher-y/survivalmodels/issues
Build a Keras Multilayer Perceptron
Description
Utility function to build a Keras MLP.
Usage
build_keras_net(
n_in,
n_out,
nodes = c(32L, 32L),
layer_pars = list(),
activation = "linear",
act_pars = list(),
dropout = 0.1,
batch_norm = TRUE,
batch_pars = list()
)
Arguments
n_in, n_out, nodes, layer_pars, activation, act_pars, dropout, batch_norm, batch_pars
Details
This function is a helper for R users with less Python experience. Currently it is limited to simple MLPs with identical layers. More advanced networks will require manual creation with keras.
Value
No return value.
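A minimal sketch of typical usage, not run automatically: it assumes keras and tensorflow are installed (e.g. via install_keras()) and that the constructed network is returned for further use with keras; the argument values are illustrative only.
if (requireNamespaces("keras")) {
  # two hidden layers of 64 units with relu activations,
  # dropout and batch normalization after each layer
  net <- build_keras_net(
    n_in = 10L, n_out = 1L,
    nodes = c(64L, 64L),
    activation = "relu",
    dropout = 0.2,
    batch_norm = TRUE
  )
}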
Build a Pytorch Multilayer Perceptron
Description
Utility function to build an MLP with a choice of activation function and weight initialization with optional dropout and batch normalization.
Usage
build_pytorch_net(
n_in,
n_out,
nodes = c(32, 32),
activation = "relu",
act_pars = list(),
dropout = 0.1,
bias = TRUE,
batch_norm = TRUE,
batch_pars = list(eps = 1e-05, momentum = 0.1, affine = TRUE),
init = "uniform",
init_pars = list()
)
Arguments
n_in, n_out, nodes, activation, act_pars, dropout, bias, batch_norm, batch_pars, init, init_pars
Details
This function is a helper for R users with less Python experience. Currently it is limited to simple MLPs. More advanced networks will require manual creation with reticulate.
Value
No return value.
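A minimal sketch, assuming torch is available via reticulate (see install_pycox()) and that the constructed network is returned for use with the pycox models; the values are illustrative only.
if (requireNamespaces("reticulate")) {
  net <- build_pytorch_net(
    n_in = 10L, n_out = 1L,
    nodes = c(32, 32),
    activation = "relu",
    dropout = 0.1,
    init = "kaiming_uniform"   # see get_pycox_init() for available methods
  )
}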
Compute Concordance of survivalmodel Risk
Description
A thin wrapper around survival::concordance which essentially just sets reverse = TRUE.
Usage
cindex(risk, truth, ...)
Arguments
risk, truth, ...
Value
The numeric value of the index.
Examples
if (requireNamespaces(c("survival", "reticulate"))) {
set.seed(10)
data <- simsurvdata(20)
fit <- deepsurv(data = data[1:10, ])
p <- predict(fit, type = "risk", newdata = data[11:20, ])
cindex(risk = p, truth = data[11:20, "time"])
}
Cox-Time Survival Neural Network
Description
Cox-Time fits a neural network based on the Cox PH with possibly time-dependent effects.
Usage
coxtime(
formula = NULL,
data = NULL,
reverse = FALSE,
time_variable = "time",
status_variable = "status",
x = NULL,
y = NULL,
frac = 0,
standardize_time = FALSE,
log_duration = FALSE,
with_mean = TRUE,
with_std = TRUE,
activation = "relu",
num_nodes = c(32L, 32L),
batch_norm = TRUE,
dropout = NULL,
device = NULL,
shrink = 0,
early_stopping = FALSE,
best_weights = FALSE,
min_delta = 0,
patience = 10L,
batch_size = 256L,
epochs = 1L,
verbose = FALSE,
num_workers = 0L,
shuffle = TRUE,
...
)
Arguments
formula, data, reverse, time_variable, status_variable, x, y, frac, standardize_time, log_duration, with_mean, with_std, activation, num_nodes, batch_norm, dropout, device, shrink, early_stopping, best_weights, min_delta, patience, batch_size, epochs, verbose, num_workers, shuffle, ...
Details
Implemented from the pycox Python package via reticulate. Calls pycox.models.CoxTime.
Value
An object inheriting from class coxtime.
An object of class survivalmodel.
References
Kvamme, H., Borgan, Ø., & Scheel, I. (2019). Time-to-event prediction with neural networks and Cox regression. Journal of Machine Learning Research, 20(129), 1–30.
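A minimal sketch of fitting and predicting, assuming pycox and its Python dependencies are installed (see install_pycox()); the hyperparameter values are illustrative only and the default 'time'/'status' column names of simsurvdata() are relied upon.
if (requireNamespaces("reticulate")) {
  set_seed(42)
  train <- simsurvdata(50)
  test  <- simsurvdata(10)
  # hold out 20% of the training data for validation
  fit <- coxtime(data = train, frac = 0.2, epochs = 5L, num_nodes = c(16L, 16L))
  # predicted survival probabilities (rows = observations, columns = time-points)
  predict(fit, newdata = test, type = "survival")
}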
DeepHit Survival Neural Network
Description
DeepHit fits a neural network based on the PMF of a discrete Cox model. This is the single (non-competing) event implementation.
Usage
deephit(
formula = NULL,
data = NULL,
reverse = FALSE,
time_variable = "time",
status_variable = "status",
x = NULL,
y = NULL,
frac = 0,
cuts = 10,
cutpoints = NULL,
scheme = c("equidistant", "quantiles"),
cut_min = 0,
activation = "relu",
custom_net = NULL,
num_nodes = c(32L, 32L),
batch_norm = TRUE,
dropout = NULL,
device = NULL,
mod_alpha = 0.2,
sigma = 0.1,
early_stopping = FALSE,
best_weights = FALSE,
min_delta = 0,
patience = 10L,
batch_size = 256L,
epochs = 1L,
verbose = FALSE,
num_workers = 0L,
shuffle = TRUE,
...
)
Arguments
formula, data, reverse, time_variable, status_variable, x, y, frac, cuts, cutpoints, scheme, cut_min, activation, custom_net, num_nodes, batch_norm, dropout, device, mod_alpha, sigma, early_stopping, best_weights, min_delta, patience, batch_size, epochs, verbose, num_workers, shuffle, ...
Details
Implemented from the pycox Python package via reticulate. Calls pycox.models.DeepHitSingle.
Value
An object inheriting from class deephit.
An object of class survivalmodel.
References
Changhee Lee, William R Zame, Jinsung Yoon, and Mihaela van der Schaar. Deephit: A deep learning approach to survival analysis with competing risks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. http://medianetlab.ee.ucla.edu/papers/AAAI_2018_DeepHit
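A minimal sketch, assuming pycox is installed; the discretisation arguments (cuts, scheme) are the ones specific to the discrete-time DeepHit model, and the values are illustrative only.
if (requireNamespaces("reticulate")) {
  set_seed(42)
  train <- simsurvdata(50)
  # discretise time into 10 equidistant intervals
  fit <- deephit(data = train, cuts = 10, scheme = "equidistant", epochs = 5L)
  predict(fit, newdata = simsurvdata(10), type = "risk")
}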
DeepSurv Survival Neural Network
Description
DeepSurv fits a neural network based on the partial likelihood from a Cox PH model.
Usage
deepsurv(
formula = NULL,
data = NULL,
reverse = FALSE,
time_variable = "time",
status_variable = "status",
x = NULL,
y = NULL,
frac = 0,
activation = "relu",
num_nodes = c(32L, 32L),
batch_norm = TRUE,
dropout = NULL,
device = NULL,
early_stopping = FALSE,
best_weights = FALSE,
min_delta = 0,
patience = 10L,
batch_size = 256L,
epochs = 1L,
verbose = FALSE,
num_workers = 0L,
shuffle = TRUE,
...
)
Arguments
formula, data, reverse, time_variable, status_variable, x, y, frac, activation, num_nodes, batch_norm, dropout, device, early_stopping, best_weights, min_delta, patience, batch_size, epochs, verbose, num_workers, shuffle, ...
Details
Implemented from the pycox Python package via reticulate. Calls pycox.models.CoxPH.
Value
An object inheriting from class deepsurv.
An object of class survivalmodel.
References
Katzman, J. L., Shaham, U., Cloninger, A., Bates, J., Jiang, T., & Kluger, Y. (2018). DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network. BMC Medical Research Methodology, 18(1), 24. https://doi.org/10.1186/s12874-018-0482-1
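A minimal sketch with early stopping, assuming pycox is installed; frac > 0 is used here so that a validation split is available for the early-stopping callback, and all values are illustrative only.
if (requireNamespaces("reticulate")) {
  set_seed(42)
  train <- simsurvdata(50)
  fit <- deepsurv(
    data = train, frac = 0.3,
    early_stopping = TRUE, patience = 5L, best_weights = TRUE,
    epochs = 20L
  )
  predict(fit, newdata = simsurvdata(10), type = "risk")
}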
Get Keras Optimizer
Description
Utility function to construct an optimizer from keras, primarily for internal use.
Usage
get_keras_optimizer(
optimizer = "adam",
lr = 0.001,
beta_1 = 0.9,
beta_2 = 0.999,
epsilon = 1e-07,
decay = NULL,
clipnorm = NULL,
clipvalue = NULL,
momentum = 0,
nesterov = FALSE,
rho = 0.95,
global_clipnorm = NULL,
use_ema = FALSE,
ema_momentum = 0.99,
ema_overwrite_frequency = NULL,
jit_compile = TRUE,
initial_accumultator_value = 0.1,
amsgrad = FALSE,
lr_power = -0.5,
l1_regularization_strength = 0,
l2_regularization_strength = 0,
l2_shrinkage_regularization_strength = 0,
beta = 0,
centered = FALSE
)
Arguments
optimizer, lr, beta_1, beta_2, epsilon, decay, clipnorm, clipvalue, global_clipnorm, momentum, nesterov, rho, use_ema, jit_compile, ema_momentum, ema_overwrite_frequency, initial_accumultator_value, amsgrad, lr_power, l1_regularization_strength, l2_regularization_strength, l2_shrinkage_regularization_strength, beta, centered
Details
Implemented optimizers are:
- "adadelta": keras::optimizer_adadelta
- "adagrad": keras::optimizer_adagrad
- "adam": keras::optimizer_adam
- "adamax": keras::optimizer_adamax
- "ftrl": keras::optimizer_ftrl
- "nadam": keras::optimizer_nadam
- "rmsprop": keras::optimizer_rmsprop
- "sgd": keras::optimizer_sgd
Value
No return value.
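A short sketch of how the optimizer strings map to the keras constructors, assuming keras and a working tensorflow installation are available; only arguments relevant to the chosen optimizer are passed, and values are illustrative.
if (requireNamespaces("keras")) {
  get_keras_optimizer("adam", lr = 0.001)
  get_keras_optimizer("sgd", lr = 0.01, momentum = 0.9, nesterov = TRUE)
  get_keras_optimizer("rmsprop", lr = 0.001, rho = 0.9)
}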
Get Pytorch Activation Function
Description
Helper function to return a class or constructed object for a pytorch activation function from torch.nn.modules.activation.
Usage
get_pycox_activation(
activation = "relu",
construct = TRUE,
alpha = 1,
dim = NULL,
lambd = 0.5,
min_val = -1,
max_val = 1,
negative_slope = 0.01,
num_parameters = 1L,
init = 0.25,
lower = 1/8,
upper = 1/3,
beta = 1,
threshold = 20,
value = 20
)
Arguments
activation, construct, alpha, dim, lambd, min_val, max_val, negative_slope, num_parameters, init, lower, upper, beta, threshold, value
Details
Implemented methods (with help pages) are:
- "celu": reticulate::py_help(torch$nn$modules$activation$CELU)
- "elu": reticulate::py_help(torch$nn$modules$activation$ELU)
- "gelu": reticulate::py_help(torch$nn$modules$activation$GELU)
- "glu": reticulate::py_help(torch$nn$modules$activation$GLU)
- "hardshrink": reticulate::py_help(torch$nn$modules$activation$Hardshrink)
- "hardsigmoid": reticulate::py_help(torch$nn$modules$activation$Hardsigmoid)
- "hardswish": reticulate::py_help(torch$nn$modules$activation$Hardswish)
- "hardtanh": reticulate::py_help(torch$nn$modules$activation$Hardtanh)
- "relu6": reticulate::py_help(torch$nn$modules$activation$ReLU6)
- "leakyrelu": reticulate::py_help(torch$nn$modules$activation$LeakyReLU)
- "logsigmoid": reticulate::py_help(torch$nn$modules$activation$LogSigmoid)
- "logsoftmax": reticulate::py_help(torch$nn$modules$activation$LogSoftmax)
- "prelu": reticulate::py_help(torch$nn$modules$activation$PReLU)
- "rrelu": reticulate::py_help(torch$nn$modules$activation$RReLU)
- "relu": reticulate::py_help(torch$nn$modules$activation$ReLU)
- "selu": reticulate::py_help(torch$nn$modules$activation$SELU)
- "sigmoid": reticulate::py_help(torch$nn$modules$activation$Sigmoid)
- "softmax": reticulate::py_help(torch$nn$modules$activation$Softmax)
- "softmax2d": reticulate::py_help(torch$nn$modules$activation$Softmax2d)
- "softmin": reticulate::py_help(torch$nn$modules$activation$Softmin)
- "softplus": reticulate::py_help(torch$nn$modules$activation$Softplus)
- "softshrink": reticulate::py_help(torch$nn$modules$activation$Softshrink)
- "softsign": reticulate::py_help(torch$nn$modules$activation$Softsign)
- "tanh": reticulate::py_help(torch$nn$modules$activation$Tanh)
- "tanhshrink": reticulate::py_help(torch$nn$modules$activation$Tanhshrink)
- "threshold": reticulate::py_help(torch$nn$modules$activation$Threshold)
Value
No return value.
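A short sketch, assuming torch is available via reticulate; construct = FALSE is intended to give the class itself and construct = TRUE an instantiated object, per the description above.
if (requireNamespaces("reticulate")) {
  # the ReLU class itself
  get_pycox_activation("relu", construct = FALSE)
  # an instantiated ELU with a non-default alpha
  get_pycox_activation("elu", construct = TRUE, alpha = 0.5)
}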
Get Torchtuples Callbacks
Description
Helper function to return torchtuples callbacks from torchtuples.callbacks.
Usage
get_pycox_callbacks(
early_stopping = FALSE,
best_weights = FALSE,
min_delta = 0,
patience = 10L
)
Arguments
early_stopping, best_weights, min_delta, patience
Value
No return value.
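A short sketch of how the fitting functions construct their callbacks internally when early_stopping = TRUE, assuming torchtuples is available via reticulate.
if (requireNamespaces("reticulate")) {
  get_pycox_callbacks(early_stopping = TRUE, min_delta = 0.01, patience = 5L)
}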
Get Pytorch Weight Initialization Method
Description
Helper function to return a character string with a populated pytorch weight initializer method from torch.nn.init. Used in build_pytorch_net to define a weighting function.
Usage
get_pycox_init(
init = "uniform",
a = 0,
b = 1,
mean = 0,
std = 1,
val,
gain = 1,
mode = c("fan_in", "fan_out"),
non_linearity = c("leaky_relu", "relu")
)
Arguments
init |
|
a |
|
b |
|
mean , std |
|
val |
|
gain |
|
mode |
|
non_linearity |
|
Details
Implemented methods (with help pages) are:
- "uniform": reticulate::py_help(torch$nn$init$uniform_)
- "normal": reticulate::py_help(torch$nn$init$normal_)
- "constant": reticulate::py_help(torch$nn$init$constant_)
- "xavier_uniform": reticulate::py_help(torch$nn$init$xavier_uniform_)
- "xavier_normal": reticulate::py_help(torch$nn$init$xavier_normal_)
- "kaiming_uniform": reticulate::py_help(torch$nn$init$kaiming_uniform_)
- "kaiming_normal": reticulate::py_help(torch$nn$init$kaiming_normal_)
- "orthogonal": reticulate::py_help(torch$nn$init$orthogonal_)
Value
No return value.
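A short sketch; per the description above the helper populates a character string describing the initializer call (e.g. as passed to build_pytorch_net(init = ...)), and the values here are illustrative only.
get_pycox_init(init = "kaiming_uniform", mode = "fan_in", non_linearity = "relu")
get_pycox_init(init = "normal", mean = 0, std = 0.02)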
Get Pytorch Optimizer
Description
Helper function to return a constructed pytorch optimizer from torch.optim.
Usage
get_pycox_optim(
optimizer = "adam",
net,
rho = 0.9,
eps = 1e-08,
lr = 1,
weight_decay = 0,
learning_rate = 0.01,
lr_decay = 0,
betas = c(0.9, 0.999),
amsgrad = FALSE,
lambd = 1e-04,
alpha = 0.75,
t0 = 1e+06,
momentum = 0,
centered = TRUE,
etas = c(0.5, 1.2),
step_sizes = c(1e-06, 50),
dampening = 0,
nesterov = FALSE
)
Arguments
optimizer, net, rho, lr, lr_decay, eps, weight_decay, learning_rate, betas, amsgrad, lambd, t0, alpha, momentum, centered, etas, step_sizes, dampening, nesterov
Details
Implemented methods (with help pages) are:
- "adadelta": reticulate::py_help(torch$optim$Adadelta)
- "adagrad": reticulate::py_help(torch$optim$Adagrad)
- "adam": reticulate::py_help(torch$optim$Adam)
- "adamax": reticulate::py_help(torch$optim$Adamax)
- "adamw": reticulate::py_help(torch$optim$AdamW)
- "asgd": reticulate::py_help(torch$optim$ASGD)
- "rmsprop": reticulate::py_help(torch$optim$RMSprop)
- "rprop": reticulate::py_help(torch$optim$Rprop)
- "sgd": reticulate::py_help(torch$optim$SGD)
- "sparse_adam": reticulate::py_help(torch$optim$SparseAdam)
Value
No return value.
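A short sketch, assuming torch is available via reticulate and that build_pytorch_net returns a network whose parameters the optimizer can be attached to; values are illustrative only.
if (requireNamespaces("reticulate")) {
  net <- build_pytorch_net(n_in = 10L, n_out = 1L)
  get_pycox_optim(optimizer = "adam", net = net, weight_decay = 1e-04)
}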
Install Keras and Tensorflow
Description
Stripped back version of keras::install_keras. Note the default for pip is changed to TRUE.
Usage
install_keras(
method = "auto",
conda = "auto",
pip = TRUE,
install_tensorflow = FALSE,
...
)
Arguments
method, conda, pip
install_tensorflow: If TRUE, the 'tensorflow' Python package is also installed.
...: Passed to reticulate::py_install.
Value
No return value.
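A sketch of a typical one-time setup call, shown commented out because installation should not run as part of examples.
# One-time setup for the keras-based utilities; not run here.
# install_keras(install_tensorflow = TRUE)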
Install Pycox With Reticulate
Description
Installs the Python 'pycox' package via reticulate. Note the default for pip is changed to TRUE.
Usage
install_pycox(
method = "auto",
conda = "auto",
pip = TRUE,
install_torch = FALSE,
...
)
Arguments
method, conda, pip
install_torch: If TRUE, the 'torch' Python package is also installed.
...: Passed to reticulate::py_install.
Value
No return value.
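A sketch of a typical one-time setup call for the pycox-based models, shown commented out because installation should not run as part of examples; install_torch() can alternatively be called separately.
# One-time setup for the pycox-based models; not run here.
# install_pycox(pip = TRUE, install_torch = TRUE)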
Install Torch With Reticulate
Description
Installs the Python 'torch' package via reticulate. Note the default for pip is changed to TRUE.
Usage
install_torch(method = "auto", conda = "auto", pip = TRUE)
Arguments
method, conda, pip
Value
No return value.
Logistic-Hazard Survival Neural Network
Description
Logistic-Hazard fits a discrete neural network based on a cross-entropy loss and predictions of a discrete hazard function, also known as Nnet-Survival.
Usage
loghaz(
formula = NULL,
data = NULL,
reverse = FALSE,
time_variable = "time",
status_variable = "status",
x = NULL,
y = NULL,
frac = 0,
cuts = 10,
cutpoints = NULL,
scheme = c("equidistant", "quantiles"),
cut_min = 0,
activation = "relu",
custom_net = NULL,
num_nodes = c(32L, 32L),
batch_norm = TRUE,
dropout = NULL,
device = NULL,
early_stopping = FALSE,
best_weights = FALSE,
min_delta = 0,
patience = 10L,
batch_size = 256L,
epochs = 1L,
verbose = FALSE,
num_workers = 0L,
shuffle = TRUE,
...
)
Arguments
formula, data, reverse, time_variable, status_variable, x, y, frac, cuts, cutpoints, scheme, cut_min, activation, custom_net, num_nodes, batch_norm, dropout, device, early_stopping, best_weights, min_delta, patience, batch_size, epochs, verbose, num_workers, shuffle, ...
Details
Implemented from the pycox Python package via reticulate. Calls pycox.models.LogisticHazard.
Value
An object inheriting from class loghaz.
An object of class survivalmodel.
References
Gensheimer, M. F., & Narasimhan, B. (2018). A Simple Discrete-Time Survival Model for Neural Networks, 1–17. https://arxiv.org/abs/1805.00917
Kvamme, H., & Borgan, Ø. (2019). Continuous and discrete-time survival prediction with neural networks. https://arxiv.org/abs/1910.06724
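A minimal sketch, assuming pycox is installed; because Logistic-Hazard is a discrete-time model, the interpolation options of predict() (see predict.pycox) can be used to smooth the predicted survival curves. Values are illustrative only.
if (requireNamespaces("reticulate")) {
  set_seed(42)
  train <- simsurvdata(50)
  fit <- loghaz(data = train, cuts = 10, epochs = 5L)
  predict(fit, newdata = simsurvdata(10), type = "survival",
          interpolate = TRUE, inter_scheme = "const_pdf", sub = 10L)
}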
PC-Hazard Survival Neural Network
Description
PC-Hazard fits a neural network that assumes a piecewise constant hazard function in continuous time, with survival predictions derived from the predicted hazards.
Usage
pchazard(
formula = NULL,
data = NULL,
reverse = FALSE,
time_variable = "time",
status_variable = "status",
x = NULL,
y = NULL,
frac = 0,
cuts = 10,
cutpoints = NULL,
scheme = c("equidistant", "quantiles"),
cut_min = 0,
activation = "relu",
custom_net = NULL,
num_nodes = c(32L, 32L),
batch_norm = TRUE,
reduction = c("mean", "none", "sum"),
dropout = NULL,
device = NULL,
early_stopping = FALSE,
best_weights = FALSE,
min_delta = 0,
patience = 10L,
batch_size = 256L,
epochs = 1L,
verbose = FALSE,
num_workers = 0L,
shuffle = TRUE,
...
)
Arguments
formula, data, reverse, time_variable, status_variable, x, y, frac, cuts, cutpoints, scheme, cut_min, activation, custom_net, num_nodes, batch_norm, dropout, reduction, device, early_stopping, best_weights, min_delta, patience, batch_size, epochs, verbose, num_workers, shuffle, ...
Details
Implemented from the pycox Python package via reticulate. Calls pycox.models.PCHazard.
Value
An object inheriting from class pchazard.
An object of class survivalmodel.
References
Kvamme, H., & Borgan, Ø. (2019). Continuous and discrete-time survival prediction with neural networks. https://arxiv.org/abs/1910.06724
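A minimal sketch, assuming pycox is installed; reduction controls how the loss is aggregated over each batch, and the remaining values are illustrative only.
if (requireNamespaces("reticulate")) {
  set_seed(42)
  fit <- pchazard(data = simsurvdata(50), cuts = 10, reduction = "mean", epochs = 5L)
  predict(fit, newdata = simsurvdata(10), type = "risk")
}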
Predict Method for pycox Neural Networks
Description
Predicted values from a fitted pycox ANN.
Usage
## S3 method for class 'pycox'
predict(
object,
newdata,
batch_size = 256L,
num_workers = 0L,
interpolate = FALSE,
inter_scheme = c("const_hazard", "const_pdf"),
sub = 10L,
type = c("survival", "risk", "all"),
...
)
Arguments
object, newdata, batch_size, num_workers, interpolate, inter_scheme, sub, type, ...
Value
A numeric if type = "risk"; a matrix if type = "survival", where entries are survival probabilities, rows correspond to observations and columns to time-points.
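A minimal sketch of the two prediction types, assuming pycox is installed; any of the fitted models above can be used in place of deepsurv, and the values are illustrative only.
if (requireNamespaces("reticulate")) {
  set_seed(42)
  fit <- deepsurv(data = simsurvdata(50), epochs = 2L)
  test <- simsurvdata(10)
  surv <- predict(fit, newdata = test, type = "survival")  # matrix of survival probabilities
  risk <- predict(fit, newdata = test, type = "risk")      # one value per observation
}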
Prepare Data for Pycox Model Training
Description
Utility function to prepare data for training in a Pycox model. Generally used internally only.
Usage
pycox_prepare_train_data(
x_train,
y_train,
frac = 0,
standardize_time = FALSE,
log_duration = FALSE,
with_mean = TRUE,
with_std = TRUE,
discretise = FALSE,
cuts = 10L,
cutpoints = NULL,
scheme = c("equidistant", "quantiles"),
cut_min = 0L,
model = c("coxtime", "deepsurv", "deephit", "loghaz", "pchazard")
)
Arguments
x_train, y_train, frac, standardize_time, log_duration, with_mean, with_std, discretise, cuts, cutpoints, scheme, cut_min, model
Value
No return value.
Vectorised Logical requireNamespace
Description
Helper function for internal use. Vectorises the requireNamespace function and returns TRUE if all packages, x, are available and FALSE otherwise.
Usage
requireNamespaces(x)
Arguments
x
Value
TRUE if all packages in x are available, otherwise FALSE.
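A short sketch of how this helper is used to guard examples that depend on optional packages.
if (requireNamespaces(c("survival", "reticulate"))) {
  # code that depends on both packages
}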
Set seed in R, numpy and torch
Description
To ensure consistent results, a seed has to be set in R using set.seed as usual, but also in numpy and torch via reticulate. This function simplifies the process into a single call.
Usage
set_seed(seed_R, seed_np = seed_R, seed_torch = seed_R)
Arguments
seed_R, seed_np, seed_torch
Value
No return value.
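A short sketch, assuming numpy and torch are available via reticulate; the single call replaces three separate seeding steps.
if (requireNamespaces("reticulate")) {
  set_seed(42)                              # same seed for R, numpy and torch
  set_seed(1, seed_np = 2, seed_torch = 3)  # or different seeds per library
}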
Simulate Survival Data
Description
Function for simulating survival data.
Usage
simsurvdata(n = 100, trt = 2, age = 2, sex = 1.5, cens = 0.3)
Arguments
n, trt, age, sex, cens
Details
Currently limited to three covariates, Weibull survival times, and Type I censoring. This will be expanded to a flexible simulation function in future updates. For now the function primarily supports the package's examples.
Value
A data.frame of simulated survival data containing the covariates and the survival time and status.
Examples
simsurvdata()
Safely convert a survival matrix prediction to a relative risk
Description
Many methods can be used to reduce a discrete survival distribution prediction (i.e. matrix) to a relative risk / ranking prediction. Here we define the predicted relative risk as the sum of the predicted cumulative hazard function - which can be loosely interpreted as the expected number of deaths for patients with similar characteristics.
Usage
surv_to_risk(x)
Arguments
x: (matrix) a predicted survival matrix, with observations in rows and time-points in columns.
Value
A numeric vector with the expected number of deaths.
References
Sonabend, R., Bender, A., & Vollmer, S. (2021). Evaluation of survival distribution predictions with discrimination measures. http://arxiv.org/abs/2112.04828.
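A minimal sketch, assuming pycox is installed: the survival matrix returned by predict(..., type = "survival") is reduced to one relative risk per observation.
if (requireNamespaces("reticulate")) {
  set_seed(42)
  fit <- deepsurv(data = simsurvdata(50), epochs = 2L)
  surv <- predict(fit, newdata = simsurvdata(10), type = "survival")
  surv_to_risk(surv)   # higher values correspond to a higher expected number of events
}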