Type: Package
Title: Utility-Based Optimal Phase II/III Drug Development Planning
Version: 1.0.2
Author: Stella Erdmann [aut], Johannes Cepicka [aut], Marietta Kirchner [aut], Meinhard Kieser [aut], Lukas D. Sauer [aut, cre]
Maintainer: Lukas D. Sauer <sauer@imbi.uni-heidelberg.de>
Description: Plan optimal sample size allocation and go/no-go decision rules for phase II/III drug development programs with time-to-event, binary or normally distributed endpoints when assuming fixed treatment effects or a prior distribution for the treatment effect, using methods from Kirchner et al. (2016) <doi:10.1002/sim.6624> and Preussler (2020). Optimal is in the sense of maximal expected utility, where the utility is a function taking into account the expected cost and benefit of the program. It is possible to extend to more complex settings with bias correction (Preussler S et al. (2020) <doi:10.1186/s12874-020-01093-w>), multiple phase III trials (Preussler et al. (2019) <doi:10.1002/bimj.201700241>), multi-arm trials (Preussler et al. (2019) <doi:10.1080/19466315.2019.1702092>), and multiple endpoints (Kieser et al. (2018) <doi:10.1002/pst.1861>).
License: MIT + file LICENSE
Encoding: UTF-8
RoxygenNote: 7.3.2
Depends: R (>= 3.5.0), doParallel, parallel, foreach, iterators
Imports: mvtnorm, cubature, msm, MASS, stats, progressr
URL: https://github.com/Sterniii3/drugdevelopR, https://sterniii3.github.io/drugdevelopR/
BugReports: https://github.com/Sterniii3/drugdevelopR/issues
Suggests: rmarkdown, knitr, testthat (≥ 3.0.0), covr, kableExtra, magrittr, devtools
VignetteBuilder: knitr
Config/testthat/parallel: true
Config/testthat/edition: 3
NeedsCompilation: no
Packaged: 2025-01-14 08:52:18 UTC; or415
Repository: CRAN
Date/Publication: 2025-01-14 12:50:01 UTC

drugdevelopR: Utility-Based Optimal Phase II/III Drug Development Planning

Description

Plan optimal sample size allocation and go/no-go decision rules for phase II/III drug development programs with time-to-event, binary or normally distributed endpoints when assuming fixed treatment effects or a prior distribution for the treatment effect, using methods from Kirchner et al. (2016) doi:10.1002/sim.6624 and Preussler (2020). Optimal is in the sense of maximal expected utility, where the utility is a function taking into account the expected cost and benefit of the program. It is possible to extend to more complex settings with bias correction (Preussler S et al. (2020) doi:10.1186/s12874-020-01093-w), multiple phase III trials (Preussler et al. (2019) doi:10.1002/bimj.201700241), multi-arm trials (Preussler et al. (2019) doi:10.1080/19466315.2019.1702092), and multiple endpoints (Kieser et al. (2018) doi:10.1002/pst.1861).

Author(s)

Maintainer: Lukas D. Sauer sauer@imbi.uni-heidelberg.de (ORCID)

Authors:

  Stella Erdmann [aut]
  Johannes Cepicka [aut]
  Marietta Kirchner [aut]
  Meinhard Kieser [aut]

See Also

Useful links:

  https://github.com/Sterniii3/drugdevelopR
  https://sterniii3.github.io/drugdevelopR/
  Report bugs at https://github.com/Sterniii3/drugdevelopR/issues


Expected probability of a successful program deciding between two or three phase III trials in a time-to-event setting

Description

The function EPsProg23() calculates the expected probability of a successful program in a time-to-event setting. This function follows a special decision rule to determine whether two or three phase III trials should be conducted: first, two phase III trials are performed, and depending on their success, it is decided whether a third phase III trial is conducted.

Usage

EPsProg23(HRgo, d2, alpha, beta, w, hr1, hr2, id1, id2, case, size, ymin)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total number of events in phase II

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

case

number of significant trials needed for approval; possible values are 2 and 3 for this function

size

effect size category; possible values are "small", "medium" and "large"

ymin

assumed minimal clinically relevant effect

Value

The output of the function EPsProg23() is the expected probability of a successful program.

Examples

res <- EPsProg23(HRgo = 0.8, d2 = 50, alpha = 0.025, beta = 0.1,
                 w = 0.3, hr1 = 0.69, hr2 = 0.81,
                 id1 = 280, id2 = 420, case = 2, size = "small",
                 ymin = 0.5)

Expected probability of a successful program deciding between two or three phase III trials for a binary distributed outcome

Description

The function EPsProg23_binary() calculates the expected probability of a successful program with a binary distributed outcome. This function follows a special decision rule to determine whether two or three phase III trials should be conducted: first, two phase III trials are performed, and depending on their success, it is decided whether a third phase III trial is conducted.

Usage

EPsProg23_binary(
  RRgo,
  n2,
  alpha,
  beta,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  case,
  size,
  ymin
)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

w

weight for mixture prior distribution

p0

assumed true rate of control group

p11

first assumed true rate of the treatment group

p12

second assumed true rate of the treatment group

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

case

number of significant trials needed for approval; possible values are 2 and 3 for this function

size

effect size category; possible values are "small", "medium" and "large"

ymin

assumed minimal clinically relevant effect

Value

The output of the function EPsProg23_binary() is the expected probability of a successful program.

Examples

res <- EPsProg23_binary(RRgo = 0.8, n2 = 50, alpha = 0.025, beta = 0.1,
                        w = 0.6, p0 = 0.3, p11 = 0.3, p12 = 0.5,
                        in1 = 300, in2 = 600, case = 2, size = "small",
                        ymin = 0.5)

Expected probability of a successful program deciding between two or three phase III trials for a normally distributed outcome

Description

The function EPsProg23_normal() calculates the expected probability of a successful program with a normally distributed outcome. This function follows a special decision rule to determine whether two or three phase III trials should be conducted: first, two phase III trials are performed, and depending on their success, it is decided whether a third phase III trial is conducted.

Usage

EPsProg23_normal(
  kappa,
  n2,
  alpha,
  beta,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  case,
  size,
  ymin
)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

type II error rate; this means that 1 - beta is the power for calculating the sample size for phase III

w

weight for the mixture prior distribution

Delta1

first assumed true treatment effect for the standardized difference in means

Delta2

second assumed true treatment effect for the standardized difference in means

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

a

lower boundary for the truncation

b

upper boundary for the truncation

case

number of significant trials needed for approval; possible values are 2 and 3 for this function

size

effect size category; possible values are "small", "medium", "large" and "all"

ymin

assumed minimal clinically relevant effect

Value

The output of the function EPsProg23_normal() is the expected probability of a successful program.

Examples

EPsProg23_normal(kappa = 0.1, n2 = 50, alpha = 0.025, beta = 0.1, w = 0.3,
                 Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                 a = 0.25, b = 0.75,
                 case = 2, size = "small", ymin = 0.5)

Expected probability of a successful program for bias adjustment programs with time-to-event outcomes

Description

To discount for overoptimistic results in phase II when calculating the optimal sample size in phase III, it is necessary to use the following functions, each of which describes a specific case.

Usage

EPsProg_L(
  HRgo,
  d2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  w,
  hr1,
  hr2,
  id1,
  id2,
  fixed
)

EPsProg_L2(
  HRgo,
  d2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  w,
  hr1,
  hr2,
  id1,
  id2,
  fixed
)

EPsProg_R(
  HRgo,
  d2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  w,
  hr1,
  hr2,
  id1,
  id2,
  fixed
)

EPsProg_R2(
  HRgo,
  d2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  w,
  hr1,
  hr2,
  id1,
  id2,
  fixed
)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total events for phase II; must be an even number

Adj

adjustment parameter

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

step1

lower boundary for effect size

step2

upper boundary for effect size

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose whether true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

Value

The output of the functions EPsProg_L(), EPsProg_L2(), EPsProg_R() and EPsProg_R2() is the expected probability of a successful program.

Examples

res <- EPsProg_L(HRgo = 0.8, d2 = 50, Adj = 0.4,
                 alpha = 0.025, beta = 0.1,
                 step1 = 1, step2 = 0.95,
                 w = 0.3, hr1 = 0.69, hr2 = 0.81,
                 id1 = 280, id2 = 420, fixed = FALSE)
res <- EPsProg_L2(HRgo = 0.8, d2 = 50, Adj = 0.4,
                  alpha = 0.025, beta = 0.1,
                  step1 = 1, step2 = 0.95,
                  w = 0.3, hr1 = 0.69, hr2 = 0.81,
                  id1 = 280, id2 = 420, fixed = FALSE)
res <- EPsProg_R(HRgo = 0.8, d2 = 50, Adj = 0.9,
                 alpha = 0.025, beta = 0.1,
                 step1 = 1, step2 = 0.95,
                 w = 0.3, hr1 = 0.69, hr2 = 0.81,
                 id1 = 280, id2 = 420, fixed = FALSE)
res <- EPsProg_R2(HRgo = 0.8, d2 = 50, Adj = 0.9,
                  alpha = 0.025, beta = 0.1,
                  step1 = 1, step2 = 0.95,
                  w = 0.3, hr1 = 0.69, hr2 = 0.81,
                  id1 = 280, id2 = 420, fixed = FALSE)

Expected probability of a successful program for bias adjustment programs with binary distributed outcomes

Description

To discount for overoptimistic results in phase II when calculating the optimal sample size in phase III, it is necessary to use the following functions, each of which describes a specific case.

Usage

EPsProg_binary_L(
  RRgo,
  n2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  p0,
  w,
  p11,
  p12,
  in1,
  in2,
  fixed
)

EPsProg_binary_L2(
  RRgo,
  n2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  p0,
  w,
  p11,
  p12,
  in1,
  in2,
  fixed
)

EPsProg_binary_R(
  RRgo,
  n2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  p0,
  w,
  p11,
  p12,
  in1,
  in2,
  fixed
)

EPsProg_binary_R2(
  RRgo,
  n2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  p0,
  w,
  p11,
  p12,
  in1,
  in2,
  fixed
)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

Adj

adjustment parameter

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

step1

lower boundary for effect size

step2

upper boundary for effect size

p0

assumed true rate of control group

w

weight for mixture prior distribution

p11

first assumed true rate of the treatment group

p12

second assumed true rate of the treatment group

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

fixed

choose whether true treatment effects are fixed or random; if TRUE, p11 is used as the fixed effect

Value

The output of the functions EPsProg_binary_L(), EPsProg_binary_L2(), EPsProg_binary_R() and EPsProg_binary_R2() is the expected probability of a successful program.

Examples

res <- EPsProg_binary_L(RRgo = 0.8, n2 = 50, Adj = 0,
                        alpha = 0.025, beta = 0.1,
                        step1 = 1, step2 = 0.95, p0 = 0.6, w = 0.3,
                        p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                        fixed = FALSE)
res <- EPsProg_binary_L2(RRgo = 0.8, n2 = 50, Adj = 0,
                         alpha = 0.025, beta = 0.1,
                         step1 = 1, step2 = 0.95, p0 = 0.6, w = 0.3,
                         p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                         fixed = FALSE)
res <- EPsProg_binary_R(RRgo = 0.8, n2 = 50, Adj = 1,
                        alpha = 0.025, beta = 0.1,
                        step1 = 1, step2 = 0.95, p0 = 0.6, w = 0.3,
                        p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                        fixed = FALSE)
res <- EPsProg_binary_R2(RRgo = 0.8, n2 = 50, Adj = 1,
                         alpha = 0.025, beta = 0.1,
                         step1 = 1, step2 = 0.95, p0 = 0.6, w = 0.3,
                         p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                         fixed = FALSE)

Expected probability of a successful program for bias adjustment programs with normally distributed outcomes

Description

To discount for overoptimistic results in phase II when calculating the optimal sample size in phase III, it is necessary to use the following functions, each of which describes a specific case.

Usage

EPsProg_normal_L(
  kappa,
  n2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  fixed
)

EPsProg_normal_L2(
  kappa,
  n2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  fixed
)

EPsProg_normal_R(
  kappa,
  n2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  fixed
)

EPsProg_normal_R2(
  kappa,
  n2,
  Adj,
  alpha,
  beta,
  step1,
  step2,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  fixed
)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

Adj

adjustment parameter

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

step1

lower boundary for effect size

step2

upper boundary for effect size

w

weight for mixture prior distribution

Delta1

first assumed true treatment effect for the standardized difference in means

Delta2

second assumed true treatment effect for the standardized difference in means

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

a

lower boundary for the truncation

b

upper boundary for the truncation

fixed

choose whether true treatment effects are fixed or random; if TRUE, Delta1 is used as the fixed effect

Value

The output of the functions EPsProg_normal_L(), EPsProg_normal_L2(), EPsProg_normal_R() and EPsProg_normal_R2() is the expected probability of a successful program.

Examples

res <- EPsProg_normal_L(kappa = 0.1, n2 = 50, Adj = 0,
                        alpha = 0.025, beta = 0.1, w = 0.3,
                        step1 = 0, step2 = 0.5,
                        Delta1 = 0.375, Delta2 = 0.625,
                        in1 = 300, in2 = 600,
                        a = 0.25, b = 0.75, fixed = FALSE)
res <- EPsProg_normal_L2(kappa = 0.1, n2 = 50, Adj = 0,
                         alpha = 0.025, beta = 0.1, w = 0.3,
                         step1 = 0, step2 = 0.5,
                         Delta1 = 0.375, Delta2 = 0.625,
                         in1 = 300, in2 = 600,
                         a = 0.25, b = 0.75, fixed = FALSE)
res <- EPsProg_normal_R(kappa = 0.1, n2 = 50, Adj = 1,
                        alpha = 0.025, beta = 0.1, w = 0.3,
                        step1 = 0, step2 = 0.5,
                        Delta1 = 0.375, Delta2 = 0.625,
                        in1 = 300, in2 = 600,
                        a = 0.25, b = 0.75, fixed = FALSE)
res <- EPsProg_normal_R2(kappa = 0.1, n2 = 50, Adj = 1,
                         alpha = 0.025, beta = 0.1, w = 0.3,
                         step1 = 0, step2 = 0.5,
                         Delta1 = 0.375, Delta2 = 0.625,
                         in1 = 300, in2 = 600,
                         a = 0.25, b = 0.75, fixed = FALSE)

Expected probability of a successful program for multiple endpoints and normally distributed outcomes

Description

This function calculates the probability that the drug development program is successful, where success is defined as both endpoints showing a statistically significant positive treatment effect in phase III.

Usage

EPsProg_multiple_normal(
  kappa,
  n2,
  alpha,
  beta,
  Delta1,
  Delta2,
  sigma1,
  sigma2,
  step11,
  step12,
  step21,
  step22,
  in1,
  in2,
  fixed,
  rho,
  rsamp
)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

Delta1

assumed true treatment effect given as difference in means for endpoint 1

Delta2

assumed true treatment effect given as difference in means for endpoint 2

sigma1

standard deviation of first endpoint

sigma2

standard deviation of second endpoint

step11

lower boundary for effect size for first endpoint

step12

lower boundary for effect size for second endpoint

step21

upper boundary for effect size for first endpoint

step22

upper boundary for effect size for second endpoint

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

fixed

choose whether true treatment effects are fixed or random; if TRUE, Delta1 is used as the fixed effect

rho

correlation between the two endpoints

rsamp

sample data set for Monte Carlo integration

Value

The output of the function EPsProg_multiple_normal() is the expected probability of a successful program when going to phase III.
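Examples

The original help page gives no example for EPsProg_multiple_normal(). The following call is an illustrative sketch only: all parameter values are hypothetical, and rsamp stands for a sample data set for the Monte Carlo integration that must be generated beforehand (its required structure is not documented here).

```r
# Illustrative sketch only -- all parameter values are hypothetical.
# rsamp is a pre-generated sample data set for Monte Carlo
# integration; its required structure is not shown here.
res <- EPsProg_multiple_normal(kappa = 0.1, n2 = 50,
                               alpha = 0.025, beta = 0.1,
                               Delta1 = 0.375, Delta2 = 0.625,
                               sigma1 = 1, sigma2 = 1,
                               step11 = 0, step12 = 0,
                               step21 = 0.5, step22 = 0.5,
                               in1 = 300, in2 = 600,
                               fixed = TRUE, rho = 0.5,
                               rsamp = rsamp)
```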


Expected probability of a successful program for multiple endpoints in a time-to-event setting

Description

This function calculates the probability that the drug development program is successful, where success is defined as at least one endpoint showing a statistically significant positive treatment effect in phase III.

Usage

EPsProg_multiple_tte(
  HRgo,
  n2,
  alpha,
  beta,
  ec,
  hr1,
  hr2,
  id1,
  id2,
  step1,
  step2,
  fixed,
  rho,
  rsamp
)

Arguments

HRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

ec

control arm event rate for phase II and III

hr1

assumed true treatment effect on HR scale for endpoint OS

hr2

assumed true treatment effect on HR scale for endpoint PFS

id1

amount of information for hr1 in terms of sample size

id2

amount of information for hr2 in terms of sample size

step1

lower boundary for effect size

step2

upper boundary for effect size

fixed

choose whether true treatment effects are fixed or random

rho

correlation between the two endpoints

rsamp

sample data set for Monte Carlo integration

Value

The output of the function EPsProg_multiple_tte() is the expected probability of a successful program when going to phase III.
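Examples

The original help page gives no example for EPsProg_multiple_tte(). The following call is an illustrative sketch only: all parameter values are hypothetical, and rsamp stands for a sample data set for the Monte Carlo integration that must be generated beforehand (its required structure is not documented here).

```r
# Illustrative sketch only -- all parameter values are hypothetical.
# rsamp is a pre-generated sample data set for Monte Carlo
# integration; its required structure is not shown here.
res <- EPsProg_multiple_tte(HRgo = 0.8, n2 = 50,
                            alpha = 0.025, beta = 0.1,
                            ec = 0.6, hr1 = 0.75, hr2 = 0.80,
                            id1 = 300, id2 = 600,
                            step1 = 1, step2 = 0.95,
                            fixed = TRUE, rho = 0.3,
                            rsamp = rsamp)
```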


Expected probability of a successful program for multitrial programs in a time-to-event setting

Description

These functions calculate the expected probability of a successful program given the parameters. Each function represents a specific strategy, e.g. the function EPsProg3() calculates the expected probability if three phase III trials are performed. The parameter case specifies how many of the trials have to be successful, i.e. how many trials show a statistically significant positive treatment effect.

Usage

EPsProg2(HRgo, d2, alpha, beta, w, hr1, hr2, id1, id2, case, size, fixed)

EPsProg3(HRgo, d2, alpha, beta, w, hr1, hr2, id1, id2, case, size, fixed)

EPsProg4(HRgo, d2, alpha, beta, w, hr1, hr2, id1, id2, case, size, fixed)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total number of events in phase II

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

case

number of significant trials needed for approval; possible values are 1, 2 and 3

size

effect size category; possible values are "small", "medium" and "large"

fixed

choose whether true treatment effects are fixed or random

Details

The following cases can be investigated by the software:

Value

The output of the functions EPsProg2(), EPsProg3() and EPsProg4() is the expected probability of a successful program when performing several phase III trials (2, 3 or 4, respectively).

Examples

EPsProg2(HRgo = 0.8, d2 = 50, alpha = 0.025, beta = 0.1,
         w = 0.3, hr1 = 0.69, hr2 = 0.81,
         id1 = 210, id2 = 420, case = 2, size = "small",
         fixed = FALSE)
EPsProg3(HRgo = 0.8, d2 = 50, alpha = 0.025, beta = 0.1,
         w = 0.3, hr1 = 0.69, hr2 = 0.81,
         id1 = 210, id2 = 420, case = 2, size = "small",
         fixed = TRUE)
EPsProg4(HRgo = 0.8, d2 = 50, alpha = 0.025, beta = 0.1,
         w = 0.3, hr1 = 0.69, hr2 = 0.81,
         id1 = 210, id2 = 420, case = 3, size = "small",
         fixed = TRUE)

Expected probability of a successful program for multitrial programs with binary distributed outcomes

Description

These functions calculate the expected probability of a successful program given the parameters. Each function represents a specific strategy, e.g. the function EPsProg3_binary() calculates the expected probability if three phase III trials are performed. The parameter case specifies how many of the trials have to be successful, i.e. how many trials show a statistically significant positive treatment effect.

Usage

EPsProg2_binary(
  RRgo,
  n2,
  alpha,
  beta,
  p0,
  w,
  p11,
  p12,
  in1,
  in2,
  case,
  size,
  fixed
)

EPsProg3_binary(
  RRgo,
  n2,
  alpha,
  beta,
  p0,
  w,
  p11,
  p12,
  in1,
  in2,
  case,
  size,
  fixed
)

EPsProg4_binary(
  RRgo,
  n2,
  alpha,
  beta,
  p0,
  w,
  p11,
  p12,
  in1,
  in2,
  case,
  size,
  fixed
)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

p0

assumed true rate of control group

w

weight for mixture prior distribution

p11

first assumed true rate of the treatment group

p12

second assumed true rate of the treatment group

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

case

number of significant trials needed for approval; possible values are 1, 2 and 3

size

effect size category; possible values are "small", "medium" and "large"

fixed

choose whether true treatment effects are fixed or random

Details

The following cases can be investigated by the software:

Value

The output of the functions EPsProg2_binary(), EPsProg3_binary() and EPsProg4_binary() is the expected probability of a successful program when performing several phase III trials (2, 3 or 4, respectively).

Examples

EPsProg2_binary(RRgo = 0.8, n2 = 50, alpha = 0.025, beta = 0.1,
                p0 = 0.6, w = 0.3, p11 = 0.3, p12 = 0.5,
                in1 = 300, in2 = 600, case = 2, size = "small",
                fixed = FALSE)
EPsProg3_binary(RRgo = 0.8, n2 = 50, alpha = 0.025, beta = 0.1,
                p0 = 0.6, w = 0.3, p11 = 0.3, p12 = 0.5,
                in1 = 300, in2 = 600, case = 2, size = "small",
                fixed = FALSE)
EPsProg4_binary(RRgo = 0.8, n2 = 50, alpha = 0.025, beta = 0.1,
                p0 = 0.6, w = 0.3, p11 = 0.3, p12 = 0.5,
                in1 = 300, in2 = 600, case = 3, size = "small",
                fixed = FALSE)

Expected probability of a successful program for multitrial programs with normally distributed outcomes

Description

These functions calculate the expected probability of a successful program given the parameters. Each function represents a specific strategy, e.g. the function EPsProg3_normal() calculates the expected probability if three phase III trials are performed. The parameter case specifies how many of the trials have to be successful, i.e. how many trials show a statistically significant positive treatment effect.

Usage

EPsProg2_normal(
  kappa,
  n2,
  alpha,
  beta,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  case,
  size,
  fixed
)

EPsProg3_normal(
  kappa,
  n2,
  alpha,
  beta,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  case,
  size,
  fixed
)

EPsProg4_normal(
  kappa,
  n2,
  alpha,
  beta,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  case,
  size,
  fixed
)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

w

weight for mixture prior distribution

Delta1

first assumed true treatment effect for the standardized difference in means

Delta2

second assumed true treatment effect for the standardized difference in means

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

a

lower boundary for the truncation

b

upper boundary for the truncation

case

number of significant trials needed for approval; possible values are 1, 2 and 3

size

effect size category; possible values are "small", "medium" and "large"

fixed

choose whether true treatment effects are fixed or random

Details

The following cases can be investigated by the software:

Value

The output of the functions EPsProg2_normal(), EPsProg3_normal() and EPsProg4_normal() is the expected probability of a successful program when performing several phase III trials (2, 3 or 4, respectively).

Examples

EPsProg2_normal(kappa = 0.1, n2 = 50, alpha = 0.025, beta = 0.1, w = 0.3,
                Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                a = 0.25, b = 0.75,
                case = 2, size = "small", fixed = FALSE)
EPsProg3_normal(kappa = 0.1, n2 = 50, alpha = 0.025, beta = 0.1, w = 0.3,
                Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                a = 0.25, b = 0.75,
                case = 2, size = "small", fixed = TRUE)
EPsProg4_normal(kappa = 0.1, n2 = 50, alpha = 0.025, beta = 0.1, w = 0.3,
                Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                a = 0.25, b = 0.75,
                case = 3, size = "small", fixed = TRUE)

Expected probability of a successful program for time-to-event outcomes

Description

Expected probability of a successful program for time-to-event outcomes

Usage

EPsProg_tte(
  HRgo,
  d2,
  alpha,
  beta,
  step1,
  step2,
  w,
  hr1,
  hr2,
  id1,
  id2,
  gamma,
  fixed
)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total events for phase II; must be an even number

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

step1

lower boundary for effect size

step2

upper boundary for effect size

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

gamma

difference in treatment effect due to different population structures in phase II and III

fixed

choose whether true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

Value

The output of the function EPsProg_tte() is the expected probability of a successful program.

Examples

res <- EPsProg_tte(HRgo = 0.8, d2 = 50,
                   alpha = 0.025, beta = 0.1,
                   step1 = 1, step2 = 0.95,
                   w = 0.3, hr1 = 0.69, hr2 = 0.81,
                   id1 = 280, id2 = 420,
                   gamma = 0, fixed = FALSE)

Expected sample size for phase III for bias adjustment programs and time-to-event outcomes

Description

To discount for overoptimistic results in phase II when calculating the optimal sample size in phase III, it is necessary to use the functions Ed3_L(), Ed3_L2(), Ed3_R() and Ed3_R2(). Each function describes a specific case.

Usage

Ed3_L(HRgo, d2, Adj, alpha, beta, w, hr1, hr2, id1, id2, fixed)

Ed3_L2(HRgo, d2, Adj, alpha, beta, w, hr1, hr2, id1, id2, fixed)

Ed3_R(HRgo, d2, Adj, alpha, beta, w, hr1, hr2, id1, id2, fixed)

Ed3_R2(HRgo, d2, Adj, alpha, beta, w, hr1, hr2, id1, id2, fixed)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total events for phase II; must be an even number

Adj

adjustment parameter

alpha

significance level

beta

type II error rate; 1 - beta is the power for the phase III sample size calculation

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose whether true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

Value

The output of the functions Ed3_L(), Ed3_L2(), Ed3_R() and Ed3_R2() is the expected number of events in phase III.

Examples

res <- Ed3_L(HRgo = 0.8, d2 = 50, Adj = 0.4,
             alpha = 0.025, beta = 0.1, w = 0.3,
             hr1 = 0.69, hr2 = 0.81,
             id1 = 280, id2 = 420, fixed = FALSE)
res <- Ed3_L2(HRgo = 0.8, d2 = 50, Adj = 0.4,
              alpha = 0.025, beta = 0.1, w = 0.3,
              hr1 = 0.69, hr2 = 0.81,
              id1 = 280, id2 = 420, fixed = FALSE)
res <- Ed3_R(HRgo = 0.8, d2 = 50, Adj = 0.9,
             alpha = 0.025, beta = 0.1, w = 0.3,
             hr1 = 0.69, hr2 = 0.81,
             id1 = 280, id2 = 420, fixed = FALSE)
res <- Ed3_R2(HRgo = 0.8, d2 = 50, Adj = 0.9,
              alpha = 0.025, beta = 0.1, w = 0.3,
              hr1 = 0.69, hr2 = 0.81,
              id1 = 280, id2 = 420, fixed = FALSE)

Expected sample size for phase III for time-to-event outcomes

Description

Expected sample size for phase III for time-to-event outcomes

Usage

Ed3_tte(HRgo, d2, alpha, beta, w, hr1, hr2, id1, id2, fixed)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total number of events for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose whether true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

Value

The output of the function Ed3_tte() is the expected number of events in phase III.

Examples

res <- Ed3_tte(HRgo = 0.8, d2 = 50,
               alpha = 0.025, beta = 0.1, w = 0.3,
               hr1 = 0.69, hr2 = 0.81,
               id1 = 280, id2 = 420, fixed = FALSE)

Expected sample size for phase III for bias adjustment programs and binary distributed outcomes

Description

To discount for overoptimistic results in phase II when calculating the optimal sample size in phase III, it is necessary to use the functions En3_binary_L(), En3_binary_L2(), En3_binary_R() and En3_binary_R2(). Each function covers a specific case: En3_binary_L() and En3_binary_L2() use an additive adjustment parameter (i.e. they adjust the lower bound of the one-sided confidence interval), whereas En3_binary_R() and En3_binary_R2() use a multiplicative adjustment parameter (i.e. a retention factor); the functions ending in 2 additionally apply the adjustment when deciding whether or not to go to phase III.

Usage

En3_binary_L(RRgo, n2, Adj, alpha, beta, p0, w, p11, p12, in1, in2, fixed)

En3_binary_L2(RRgo, n2, Adj, alpha, beta, p0, w, p11, p12, in1, in2, fixed)

En3_binary_R(RRgo, n2, Adj, alpha, beta, p0, w, p11, p12, in1, in2, fixed)

En3_binary_R2(RRgo, n2, Adj, alpha, beta, p0, w, p11, p12, in1, in2, fixed)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

Adj

adjustment parameter

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

p0

assumed true rate of control group

w

weight for mixture prior distribution

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

fixed

choose whether true treatment effects are fixed or random; if TRUE, p11 is used as the fixed effect

Value

The output of the functions En3_binary_L, En3_binary_L2, En3_binary_R and En3_binary_R2 is the expected number of participants in phase III.

Examples

res <- En3_binary_L(RRgo = 0.8, n2 = 50, Adj = 0,
                    alpha = 0.025, beta = 0.1, p0 = 0.6, w = 0.3,
                    p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                    fixed = FALSE)
res <- En3_binary_L2(RRgo = 0.8, n2 = 50, Adj = 0,
                     alpha = 0.025, beta = 0.1, p0 = 0.6, w = 0.3,
                     p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                     fixed = FALSE)
res <- En3_binary_R(RRgo = 0.8, n2 = 50, Adj = 1,
                    alpha = 0.025, beta = 0.1, p0 = 0.6, w = 0.3,
                    p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                    fixed = FALSE)
res <- En3_binary_R2(RRgo = 0.8, n2 = 50, Adj = 1,
                     alpha = 0.025, beta = 0.1, p0 = 0.6, w = 0.3,
                     p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                     fixed = FALSE)

Expected sample size for phase III for bias adjustment programs and normally distributed outcomes

Description

To discount for overoptimistic results in phase II when calculating the optimal sample size in phase III, it is necessary to use the functions En3_normal_L(), En3_normal_L2(), En3_normal_R() and En3_normal_R2(). Each function covers a specific case: En3_normal_L() and En3_normal_L2() use an additive adjustment parameter (i.e. they adjust the lower bound of the one-sided confidence interval), whereas En3_normal_R() and En3_normal_R2() use a multiplicative adjustment parameter (i.e. a retention factor); the functions ending in 2 additionally apply the adjustment when deciding whether or not to go to phase III.

Usage

En3_normal_L(
  kappa,
  n2,
  Adj,
  alpha,
  beta,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  fixed
)

En3_normal_L2(
  kappa,
  n2,
  Adj,
  alpha,
  beta,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  fixed
)

En3_normal_R(
  kappa,
  n2,
  Adj,
  alpha,
  beta,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  fixed
)

En3_normal_R2(
  kappa,
  n2,
  Adj,
  alpha,
  beta,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  fixed
)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

Adj

adjustment parameter

alpha

significance level

beta

1 - beta is the power for calculation of sample size for phase III

w

weight for mixture prior distribution

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

a

lower boundary for the truncation

b

upper boundary for the truncation

fixed

choose whether true treatment effects are fixed or random; if TRUE, Delta1 is used as the fixed effect

Value

The output of the functions En3_normal_L, En3_normal_L2, En3_normal_R and En3_normal_R2 is the expected number of participants in phase III.

Examples

res <- En3_normal_L(kappa = 0.1, n2 = 50, Adj = 0,
                    alpha = 0.025, beta = 0.1, w = 0.3,
                    Delta1 = 0.375, Delta2 = 0.625,
                    in1 = 300, in2 = 600,
                    a = 0.25, b = 0.75, fixed = FALSE)
res <- En3_normal_L2(kappa = 0.1, n2 = 50, Adj = 0,
                     alpha = 0.025, beta = 0.1, w = 0.3,
                     Delta1 = 0.375, Delta2 = 0.625,
                     in1 = 300, in2 = 600,
                     a = 0.25, b = 0.75, fixed = TRUE)
res <- En3_normal_R(kappa = 0.1, n2 = 50, Adj = 1,
                    alpha = 0.025, beta = 0.1, w = 0.3,
                    Delta1 = 0.375, Delta2 = 0.625,
                    in1 = 300, in2 = 600,
                    a = 0.25, b = 0.75, fixed = FALSE)
res <- En3_normal_R2(kappa = 0.1, n2 = 50, Adj = 1,
                     alpha = 0.025, beta = 0.1, w = 0.3,
                     Delta1 = 0.375, Delta2 = 0.625,
                     in1 = 300, in2 = 600,
                     a = 0.25, b = 0.75, fixed = FALSE)

Expected probability to do third phase III trial

Description

In the setting of Case 2, Strategy 2/2(+1) (at least two trials significant, and the treatment effect of the remaining trial at least pointing in the same direction), this function calculates the probability that a third phase III trial is necessary.

Usage

Epgo23(HRgo, d2, alpha, beta, w, hr1, hr2, id1, id2)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total number of events in phase II

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

Value

The output of the function Epgo23() is the probability that a third phase III trial is necessary.

Examples

res <- Epgo23(HRgo = 0.8, d2 = 50, w = 0.3, alpha = 0.025, beta = 0.1,
              hr1 = 0.69, hr2 = 0.81, id1 = 280, id2 = 420)

Expected probability to do third phase III trial

Description

In the setting of Case 2, Strategy 2/2(+1) (at least two trials significant, and the treatment effect of the remaining trial at least pointing in the same direction), this function calculates the probability that a third phase III trial is necessary.

Usage

Epgo23_binary(RRgo, n2, alpha, beta, p0, w, p11, p12, in1, in2)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

p0

assumed true rate of control group

w

weight for mixture prior distribution

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

Value

The output of the function Epgo23_binary() is the probability that a third phase III trial is necessary.

Examples

res <- Epgo23_binary(RRgo = 0.8, n2 = 50, p0 = 0.3, w = 0.3, alpha = 0.025,
                     beta = 0.1, p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600)

Expected probability to do third phase III trial

Description

In the setting of Case 2, Strategy 2/2(+1) (at least two trials significant, and the treatment effect of the remaining trial at least pointing in the same direction), this function calculates the probability that a third phase III trial is necessary.

Usage

Epgo23_normal(kappa, n2, alpha, beta, a, b, w, Delta1, Delta2, in1, in2)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

a

lower boundary for the truncation

b

upper boundary for the truncation

w

weight for mixture prior distribution

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

Value

The output of the function Epgo23_normal() is the probability that a third phase III trial is necessary.

Examples

Epgo23_normal(kappa = 0.1, n2 = 50, w = 0.3, alpha = 0.025, beta = 0.1,
              a = 0.25, b = 0.75,
              Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600)

Expected probability to go to phase III for bias adjustment programs with time-to-event outcomes

Description

If we do not only want to discount for overoptimistic results in phase II when calculating the sample size in phase III, but also when deciding whether or not to go to phase III, the functions Epgo_L2() and Epgo_R2() are necessary. The function Epgo_L2() uses an additive adjustment parameter (i.e. it adjusts the lower bound of the one-sided confidence interval), whereas Epgo_R2() uses a multiplicative adjustment parameter (i.e. it multiplies the estimate by a retention factor).

Usage

Epgo_L2(HRgo, d2, Adj, w, hr1, hr2, id1, id2, fixed)

Epgo_R2(HRgo, d2, Adj, w, hr1, hr2, id1, id2, fixed)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total number of events for phase II; must be an even number

Adj

adjustment parameter

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose whether true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

Value

The output of the functions Epgo_L2 and Epgo_R2 is the expected probability to go to phase III with conservative decision rule and sample size calculation.

Examples

res <- Epgo_L2(HRgo = 0.8, d2 = 50, Adj = 0.4,
               w = 0.3, hr1 = 0.69, hr2 = 0.81,
               id1 = 280, id2 = 420, fixed = FALSE)
res <- Epgo_R2(HRgo = 0.8, d2 = 50, Adj = 0.9,
               w = 0.3, hr1 = 0.69, hr2 = 0.81,
               id1 = 280, id2 = 420, fixed = FALSE)

Expected probability to go to phase III for bias adjustment programs with binary distributed outcomes

Description

If we do not only want to discount for overoptimistic results in phase II when calculating the sample size in phase III, but also when deciding whether or not to go to phase III, the functions Epgo_binary_L2() and Epgo_binary_R2() are necessary. The function Epgo_binary_L2() uses an additive adjustment parameter (i.e. it adjusts the lower bound of the one-sided confidence interval), whereas Epgo_binary_R2() uses a multiplicative adjustment parameter (i.e. it multiplies the estimate by a retention factor).

Usage

Epgo_binary_L2(RRgo, n2, Adj, p0, w, p11, p12, in1, in2, fixed)

Epgo_binary_R2(RRgo, n2, Adj, p0, w, p11, p12, in1, in2, fixed)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

Adj

adjustment parameter

p0

assumed true rate of control group

w

weight for mixture prior distribution

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

fixed

choose whether true treatment effects are fixed or random; if TRUE, p11 is used as the fixed effect

Value

The output of the functions Epgo_binary_L2() and Epgo_binary_R2() is the expected probability to go to phase III with conservative decision rule and sample size calculation.

Examples

res <- Epgo_binary_L2(RRgo = 0.8, n2 = 50, Adj = 0, p0 = 0.6, w = 0.3,
                      p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                      fixed = FALSE)
res <- Epgo_binary_R2(RRgo = 0.8, n2 = 50, Adj = 1, p0 = 0.6, w = 0.3,
                      p11 = 0.3, p12 = 0.5, in1 = 300, in2 = 600,
                      fixed = FALSE)

Expected probability to go to phase III for bias adjustment programs with normally distributed outcomes

Description

If we do not only want to discount for overoptimistic results in phase II when calculating the sample size in phase III, but also when deciding whether or not to go to phase III, the functions Epgo_normal_L2() and Epgo_normal_R2() are necessary. The function Epgo_normal_L2() uses an additive adjustment parameter (i.e. it adjusts the lower bound of the one-sided confidence interval), whereas Epgo_normal_R2() uses a multiplicative adjustment parameter (i.e. it multiplies the estimate by a retention factor).

Usage

Epgo_normal_L2(kappa, n2, Adj, w, Delta1, Delta2, in1, in2, a, b, fixed)

Epgo_normal_R2(kappa, n2, Adj, w, Delta1, Delta2, in1, in2, a, b, fixed)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

Adj

adjustment parameter

w

weight for mixture prior distribution

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

a

lower boundary for the truncation

b

upper boundary for the truncation

fixed

choose whether true treatment effects are fixed or random; if TRUE, Delta1 is used as the fixed effect

Value

The output of the functions Epgo_normal_L2() and Epgo_normal_R2() is the expected probability to go to phase III with conservative decision rule and sample size calculation.

Examples

res <- Epgo_normal_L2(kappa = 0.1, n2 = 50, Adj = 0, w = 0.3,
                      Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                      a = 0.25, b = 0.75, fixed = FALSE)
res <- Epgo_normal_R2(kappa = 0.1, n2 = 50, Adj = 1, w = 0.3,
                      Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                      a = 0.25, b = 0.75, fixed = FALSE)

Expected probability to go to phase III for time-to-event outcomes

Description

Expected probability to go to phase III for time-to-event outcomes

Usage

Epgo_tte(HRgo, d2, w, hr1, hr2, id1, id2, fixed)

Arguments

HRgo

threshold value for the go/no-go decision rule

d2

total number of events for phase II; must be an even number

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose whether true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

Value

The output of the function Epgo_tte() is the expected probability to go to phase III.

Examples

res <- Epgo_tte(HRgo = 0.8, d2 = 50,
                w = 0.3, hr1 = 0.69, hr2 = 0.81,
                id1 = 280, id2 = 420, fixed = FALSE)

Expected sample size for phase III for multiarm programs with binary distributed outcomes

Description

Given that the phase II results are promising enough to obtain the "go"-decision for phase III, this function calculates the expected sample size for phase III for the cases and strategies listed below. The results of this function are necessary for calculating the utility of the program, which is then in a further step maximized by the optimal_multiarm_binary() function.

Usage

Ess_binary(RRgo, n2, alpha, beta, p0, p11, p12, strategy, case)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

p0

assumed true rate of control group

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising") or 3 (both)

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function Ess_binary() returns the expected sample size for phase III when going to phase III.

Examples

res <- Ess_binary(RRgo = 0.8, n2 = 50, alpha = 0.05, beta = 0.1,
                  p0 = 0.6, p11 = 0.3, p12 = 0.5, strategy = 3, case = 31)

Expected sample size for phase III for multiple endpoints with normally distributed outcomes

Description

Given that the phase II results are promising enough to obtain the "go"-decision for phase III, this function calculates the expected sample size for phase III. The results of this function are necessary for calculating the utility of the program, which is then in a further step maximized by the optimal_multiple_normal() function.

Usage

Ess_multiple_normal(
  kappa,
  n2,
  alpha,
  beta,
  Delta1,
  Delta2,
  in1,
  in2,
  sigma1,
  sigma2,
  fixed,
  rho,
  rsamp
)

Arguments

kappa

threshold value for the go/no-go decision rule; vector for both endpoints

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

Delta1

assumed true treatment effect given as difference in means for endpoint 1

Delta2

assumed true treatment effect given as difference in means for endpoint 2

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

sigma1

standard deviation of first endpoint

sigma2

standard deviation of second endpoint

fixed

choose whether true treatment effects are fixed or random; if TRUE, Delta1 is used as the fixed effect

rho

correlation between the two endpoints

rsamp

sample data set for Monte Carlo integration

Value

The output of the function Ess_multiple_normal() is the expected number of participants in phase III.
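No packaged example is shown for this internal function. The rsamp argument is a sample data set for Monte Carlo integration over the two correlated endpoints; the following is a hedged sketch of how such a sample could be generated with MASS (listed in the package's Imports). The column layout of rsamp is an assumption here, not the documented interface of Ess_multiple_normal().

```r
# Hypothetical construction of a Monte Carlo sample for two correlated,
# normally distributed endpoints; the exact structure drugdevelopR
# expects may differ from this sketch.
library(MASS)

set.seed(42)
rho   <- 0.5
Sigma <- matrix(c(1, rho, rho, 1), nrow = 2)  # correlation matrix of the endpoints
rsamp <- mvrnorm(n = 10000, mu = c(0.375, 0.625), Sigma = Sigma)

dim(rsamp)  # 10000 draws, one column per endpoint
```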


Expected sample size for phase III for multiple endpoints with time-to-event outcomes

Description

Given that the phase II results are promising enough to obtain the "go"-decision for phase III, this function calculates the expected sample size for phase III. The results of this function are necessary for calculating the utility of the program, which is then in a further step maximized by the optimal_multiple_tte() function.

Usage

Ess_multiple_tte(HRgo, n2, alpha, beta, hr1, hr2, id1, id2, fixed, rho)

Arguments

HRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

one-sided significance level

beta

1 - beta is the power for the calculation of the number of events for phase III by the Schoenfeld (1981) formula

hr1

assumed true treatment effect on HR scale for endpoint OS

hr2

assumed true treatment effect on HR scale for endpoint PFS

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose whether true treatment effects are fixed or random

rho

correlation between the two endpoints

Value

The output of the function Ess_multiple_tte() is the expected number of participants in phase III.


Expected sample size for phase III for multiarm programs with normally distributed outcomes

Description

Given that the phase II results are promising enough to obtain the "go"-decision for phase III, this function calculates the expected sample size for phase III for the cases and strategies listed below. The results of this function are necessary for calculating the utility of the program, which is then in a further step maximized by the optimal_multiarm_normal() function.

Usage

Ess_normal(kappa, n2, alpha, beta, Delta1, Delta2, strategy, case)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising") or 3 (both)

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function Ess_normal() returns the expected sample size for phase III when going to phase III, for the case that outcomes are normally distributed and we consider multiarm programs, i.e. several phase III trials with different doses or different treatments are performed.

Examples

res <- Ess_normal(kappa = 0.1, n2 = 50, alpha = 0.05, beta = 0.1,
                  Delta1 = 0.375, Delta2 = 0.625, strategy = 3, case = 31)

Expected sample size for phase III for multiarm programs with time-to-event outcomes

Description

Given that the phase II results are promising enough to obtain the "go"-decision for phase III, this function calculates the expected sample size for phase III for the cases and strategies listed below. The results of this function are necessary for calculating the utility of the program, which is then in a further step maximized by the optimal_multiarm() function.

Usage

Ess_tte(HRgo, n2, alpha, beta, ec, hr1, hr2, strategy, case)

Arguments

HRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

ec

control arm event rate for phase II and III

hr1

assumed true treatment effect on HR scale for treatment 1

hr2

assumed true treatment effect on HR scale for treatment 2

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising") or 3 (both)

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function Ess_tte() returns the expected sample size for phase III when going to phase III.

Examples

res <- Ess_tte(HRgo = 0.8, n2 = 50, alpha = 0.05, beta = 0.1,
               ec = 0.6, hr1 = 0.7, hr2 = 0.8, strategy = 2, case = 21)

Probability of a successful program for multiarm programs with binary distributed outcomes

Description

Given that we obtain the "go"-decision in phase II, this function calculates the probability that the results of the confirmatory trial (phase III) are significant, i.e. that there is a statistically significant positive effect of the treatment.

Usage

PsProg_binary(
  RRgo,
  n2,
  alpha,
  beta,
  p0,
  p11,
  p12,
  step1,
  step2,
  strategy,
  case
)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be divisible by three

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

p0

assumed true rate of control group

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

step1

lower boundary for effect size

step2

upper boundary for effect size

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising") or 3 (both)

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function PsProg_binary() returns the probability of a successful program.

Examples

res <- PsProg_binary(RRgo = 0.8, n2 = 50, alpha = 0.05, beta = 0.1,
                     p0 = 0.6, p11 = 0.3, p12 = 0.5, step1 = 1, step2 = 0.95,
                     strategy = 3, case = 31)

Probability of a successful program for multiarm programs with normally distributed outcomes

Description

Given that we obtain the "go"-decision in phase II, this function calculates the probability that the results of the confirmatory trial (phase III) are significant, i.e. that there is a statistically significant positive effect of the treatment.

Usage

PsProg_normal(
  kappa,
  n2,
  alpha,
  beta,
  Delta1,
  Delta2,
  step1,
  step2,
  strategy,
  case
)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

step1

lower boundary for effect size

step2

upper boundary for effect size

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising") or 3 (both)

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function PsProg_normal() returns the probability of a successful program.

Examples

res <- PsProg_normal(kappa = 0.1, n2 = 50, alpha = 0.05, beta = 0.1,
                     Delta1 = 0.375, Delta2 = 0.625, step1 = 0, step2 = 0.5,
                     strategy = 3, case = 31)

Probability of a successful program for multiarm programs with time-to-event outcomes

Description

Given that we obtain the "go"-decision in phase II, this function calculates the probability that the results of the confirmatory trial (phase III) are significant, i.e. that there is a statistically significant positive effect of the treatment.

Usage

PsProg_tte(HRgo, n2, alpha, beta, ec, hr1, hr2, step1, step2, strategy, case)

Arguments

HRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be divisible by three

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

ec

control arm event rate for phase II and III

hr1

assumed true treatment effect on HR scale for treatment 1

hr2

assumed true treatment effect on HR scale for treatment 2

step1

lower boundary for effect size

step2

upper boundary for effect size

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising") or 3 (both)

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function PsProg_tte() returns the probability of a successful program.

Examples

res <- PsProg_tte(HRgo = 0.8, n2 = 50, alpha = 0.05, beta = 0.1,
                  ec = 0.6, hr1 = 0.7, hr2 = 0.8, step1 = 1, step2 = 0.95,
                  strategy = 2, case = 21)

Expected number of events for phase III when skipping phase II for time-to-event outcomes

Description

If choosing skipII = TRUE, the program calculates the expected utility for the case when phase II is skipped and compares it to the situation when phase II is not skipped. This function calculates the expected sample size for phase III for time-to-event outcomes using a median prior.

Usage

d3_skipII_tte(alpha, beta, median_prior)

Arguments

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

median_prior

the median of the prior distribution; given as -log(hr1), where hr1 is the assumed true treatment effect

Value

The output of the function d3_skipII_tte() is the expected number of events in phase III when skipping phase II.

Examples

res <- d3_skipII_tte(alpha = 0.05, beta = 0.1, median_prior = 0.35)
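Event numbers for time-to-event phase III trials are typically derived from Schoenfeld's (1981) formula, which this package also cites for Ess_multiple_tte(). A minimal sketch of such a calculation under 1:1 allocation follows; d3_schoenfeld() is an illustrative helper, not drugdevelopR's exact implementation of d3_skipII_tte(), and the rounding up is an assumption of this sketch.

```r
# Sketch of an event-number calculation via Schoenfeld's (1981) formula
# for a 1:1 randomized trial; theta is the assumed true log hazard ratio,
# here median_prior = -log(hr1).
d3_schoenfeld <- function(alpha, beta, theta) {
  ceiling(4 * (qnorm(1 - alpha) + qnorm(1 - beta))^2 / theta^2)
}

# Smaller assumed effects require more events.
d3_schoenfeld(alpha = 0.05, beta = 0.1, theta = 0.35)
```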

Density of the bivariate normal distribution

Description

Density of the bivariate normal distribution

Usage

dbivanorm(x, y, mu1, mu2, sigma1, sigma2, rho)

Arguments

x

integral variable

y

integral variable

mu1

mean of first endpoint

mu2

mean of second endpoint

sigma1

standard deviation of first endpoint

sigma2

standard deviation of second endpoint

rho

correlation between the two endpoints

Value

The function dbivanorm() returns the density of a bivariate normal distribution.
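The bivariate normal density can be written out directly in base R. The following is a minimal sketch (dbivanorm_sketch() is an illustrative stand-in, not the package's internal code); for rho = 0 it factorizes into the product of the two univariate normal densities.

```r
# Base-R sketch of the bivariate normal density evaluated at (x, y).
# mu1, mu2: means; sigma1, sigma2: standard deviations; rho: correlation.
dbivanorm_sketch <- function(x, y, mu1, mu2, sigma1, sigma2, rho) {
  z1 <- (x - mu1) / sigma1
  z2 <- (y - mu2) / sigma2
  q  <- (z1^2 - 2 * rho * z1 * z2 + z2^2) / (1 - rho^2)
  exp(-q / 2) / (2 * pi * sigma1 * sigma2 * sqrt(1 - rho^2))
}

# For rho = 0 this equals dnorm(x) * dnorm(y) for standard normal marginals.
dbivanorm_sketch(0.5, -0.2, mu1 = 0, mu2 = 0, sigma1 = 1, sigma2 = 1, rho = 0)
```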


Utility based optimal phase II/III drug development planning

Description

The drugdevelopR package enables utility-based planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules. The assumed true treatment effects can be fixed (planning is then also possible via the user-friendly R Shiny app drugdevelopR) or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

drugdevelopR()

drugdevelopR package and R Shiny App

The drugdevelopR package provides the functions to plan optimal phase II/III drug development programs with time-to-event, binary or normally distributed endpoints, where the treatment effect is assumed fixed or modelled by a prior. In these settings, optimal phase II/III drug development planning with fixed assumed treatment effects can also be done with the help of the R Shiny application basic. Extensions to the basic setting are bias adjustment, multiple phase III trials, multi-arm trials, and multiple endpoints.

The R Shiny App drugdevelopR serves as homepage, navigating the different parts of drugdevelopR via links.

References

Kirchner, M., Kieser, M., Goette, H., & Schueler, A. (2016). Utility-based optimization of phase II/III programs. Statistics in Medicine, 35(2), 305-316.

Preussler, S., Kieser, M., and Kirchner, M. (2019). Optimal sample size allocation and go/no-go decision rules for phase II/III programs where several phase III trials are performed. Biometrical Journal, 61(2), 357-378.

Preussler, S., Kirchner, M., Goette, H., Kieser, M. (2020). Optimal designs for phase II/III drug development programs including methods for discounting of phase II results. BMC Medical Research Methodology, 20, 253.

Preussler, S., Kirchner, M., Goette, H., Kieser, M. (2019). Optimal designs for multi-arm phase II/III drug development programs. Statistics in Biopharmaceutical Research.


Construct a drugdevelopResult object from a data frame

Description

This is a short wrapper for adding the "drugdevelopResult" string to the list of S3 classes of a data frame.

Usage

drugdevelopResult(x, ...)

Arguments

x

data frame

Value

the same data frame, equipped with class "drugdevelopResult"
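
A minimal sketch of the wrapper in use; the data frame columns here are made up for illustration, and the exact class vector returned may differ.

```r
# Tag an ordinary data frame with the extra S3 class
res <- drugdevelopResult(data.frame(u = 100, n2 = 50, n3 = 280))
class(res)
# expected to include "drugdevelopResult" ahead of "data.frame"
```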


Density for the maximum of two normally distributed random variables

Description

The function fmax() returns f(z), the value of the density function of the maximum of two normally distributed random variables.

Usage

fmax(z, mu1, mu2, sigma1, sigma2, rho)

Arguments

z

integral variable

mu1

mean of first endpoint

mu2

mean of second endpoint

sigma1

standard deviation of first endpoint

sigma2

standard deviation of second endpoint

rho

correlation between the two endpoints

Details

Z = max(X,Y) with X ~ N(mu1,sigma1^2), Y ~ N(mu2,sigma2^2)

f(z)=f1(-z)+f2(-z)

Value

The function fmax() returns f(z), the value of the density function of the maximum of two normally distributed random variables.
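
A quick numerical sanity check: as a density, fmax() should integrate to one. This sketch assumes fmax() is accessible; as an internal helper it may require the drugdevelopR::: prefix.

```r
# Vectorize over z, then integrate the density over the real line
f <- function(z) sapply(z, fmax, mu1 = 0, mu2 = 1,
                        sigma1 = 1, sigma2 = 2, rho = 0.3)
integrate(f, lower = -Inf, upper = Inf)
# expected to be close to 1
```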


Density for the minimum of two normally distributed random variables

Description

The function fmin() returns f(z), the value of the density function of the minimum of two normally distributed random variables.

Usage

fmin(y, mu1, mu2, sigma1, sigma2, rho)

Arguments

y

integral variable

mu1

mean of first endpoint

mu2

mean of second endpoint

sigma1

standard deviation of first endpoint

sigma2

standard deviation of second endpoint

rho

correlation between the two endpoints

Details

Z = min(X,Y) with X ~ N(mu1,sigma1^2), Y ~ N(mu2,sigma2^2)

f(z)=f1(z)+f2(z)

Value

The function fmin() returns f(z), the value of the density function of the minimum of two normally distributed random variables.
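
The density can also be cross-checked against a Monte Carlo sample of min(X, Y). This sketch assumes fmin() is accessible and uses MASS, which is in the package Imports.

```r
library(MASS)

set.seed(42)
# Covariance for sigma1 = 1, sigma2 = 2, rho = 0.3 (off-diagonal 0.3 * 1 * 2)
Sigma <- matrix(c(1, 0.6, 0.6, 4), nrow = 2)
xy <- mvrnorm(1e5, mu = c(0, 1), Sigma = Sigma)

# Empirical distribution of the minimum with the fmin() density overlaid
hist(pmin(xy[, 1], xy[, 2]), freq = FALSE, breaks = 60,
     main = "min(X, Y) with fmin() density")
curve(sapply(x, fmin, mu1 = 0, mu2 = 1, sigma1 = 1, sigma2 = 2, rho = 0.3),
      add = TRUE, lwd = 2)
```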


Generate sample for Monte Carlo integration in the multiple setting

Description

Generate sample for Monte Carlo integration in the multiple setting

Usage

get_sample_multiple_normal(Delta1, Delta2, in1, in2, rho)

Arguments

Delta1

assumed true treatment effect given as difference in means for endpoint 1

Delta2

assumed true treatment effect given as difference in means for endpoint 2

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

rho

correlation between the two endpoints

Value

a randomly generated data frame
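
A sketch of drawing such a sample; the parameter values are illustrative, and as an internal helper the function may require the drugdevelopR::: prefix.

```r
# Correlated Monte Carlo sample for two normally distributed endpoints
smpl <- get_sample_multiple_normal(Delta1 = 0.375, Delta2 = 0.625,
                                   in1 = 300, in2 = 600, rho = 0.5)
head(smpl)
```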


Generate sample for Monte Carlo integration in the multiple setting

Description

Generate sample for Monte Carlo integration in the multiple setting

Usage

get_sample_multiple_tte(hr1, hr2, id1, id2, rho)

Arguments

hr1

assumed true treatment effect on HR scale for endpoint OS

hr2

assumed true treatment effect on HR scale for endpoint PFS

id1

amount of information for hr1 in terms of sample size

id2

amount of information for hr2 in terms of sample size

rho

correlation between the two endpoints

Value

a randomly generated data frame
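
The time-to-event counterpart works analogously (sketch with illustrative hazard ratios for OS and PFS; may require the drugdevelopR::: prefix).

```r
# Correlated Monte Carlo sample for the two time-to-event endpoints
smpl <- get_sample_multiple_tte(hr1 = 0.75, hr2 = 0.80,
                                id1 = 210, id2 = 420, rho = 0.5)
head(smpl)
```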


Optimal phase II/III drug development planning for time-to-event endpoints when discounting phase II results

Description

The function optimal_bias of the drugdevelopR package enables planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules, including methods for discounting of phase II results for time-to-event endpoints (Preussler et al., 2020). The discounting may be necessary as programs that proceed to phase III can be overoptimistic about the treatment effect (i.e. they are biased). The true treatment effects can be assumed fixed (planning is then also possible via the user-friendly R Shiny app bias) or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_bias(
  w,
  hr1,
  hr2,
  id1,
  id2,
  d2min,
  d2max,
  stepd2,
  hrgomin,
  hrgomax,
  stephrgo,
  adj = "both",
  lambdamin = NULL,
  lambdamax = NULL,
  steplambda = NULL,
  alphaCImin = NULL,
  alphaCImax = NULL,
  stepalphaCI = NULL,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  b1,
  b2,
  b3,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution, see the vignette on priors as well as the Shiny app for more details concerning the definition of a prior distribution.

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

d2min

minimal number of events for phase II

d2max

maximal number of events for phase II

stepd2

stepsize for the optimization over d2

hrgomin

minimal threshold value for the go/no-go decision rule

hrgomax

maximal threshold value for the go/no-go decision rule

stephrgo

stepsize for the optimization over HRgo

adj

choose type of adjustment: "multiplicative", "additive", "both" or "all". When using "both", res[1,] contains the results using the multiplicative method and res[2,] contains the results using the additive method. When using "all", there are also res[3,] and res[4,], containing the results of a multiplicative and an additive method which do not only adjust the treatment effect but also the threshold value for the decision rule.

lambdamin

minimal multiplicative adjustment parameter lambda (i.e. use estimate with a retention factor)

lambdamax

maximal multiplicative adjustment parameter lambda (i.e. use estimate with a retention factor)

steplambda

stepsize for the adjustment parameter lambda

alphaCImin

minimal additive adjustment parameter alphaCI (i.e. adjust the lower bound of the one-sided confidence interval)

alphaCImax

maximal additive adjustment parameter alphaCI (i.e. adjust the lower bound of the one-sided confidence interval)

stepalphaCI

stepsize for alphaCI

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for the calculation of the number of events for phase III by Schoenfeld's (1981) formula

xi2

event rate for phase II

xi3

event rate for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in HR scale, default: 1

stepm1

lower boundary for effect size category "medium" in HR scale = upper boundary for effect size category "small" in HR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in HR scale = upper boundary for effect size category "medium" in HR scale, default: 0.85

b1

expected gain for effect size category "small" in 10^5 $

b2

expected gain for effect size category "medium" in 10^5 $

b3

expected gain for effect size category "large" in 10^5 $

fixed

choose if true treatment effects are fixed or random, if TRUE hr1 is used as fixed effect

num_cl

number of clusters used for parallel computing, default: 1

Value

The output of the function is a data.frame object containing the optimization results:

Method

Type of adjustment: "multipl." (multiplicative adjustment of effect size), "add." (additive adjustment of effect size), "multipl2." (multiplicative adjustment of effect size and threshold), "add2." (additive adjustment of effect size and threshold)

Adj

optimal adjustment parameter (lambda or alphaCI according to Method)

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

HRgo

optimal threshold value for the decision rule to go to phase III

d2

optimal total number of events for phase II

d3

total expected number of events for phase III; rounded to next natural number

d

total expected number of events in the program; d = d2 + d3

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set or the sum K2+K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III

sProg2

probability of a successful program with "medium" treatment effect in phase III

sProg3

probability of a successful program with "large" treatment effect in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Calling cat(comment()) on the data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.
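
This metadata can be retrieved as in the following sketch, assuming res holds the return value of an optimal_bias() run.

```r
# Inspect the metadata attached to an optimization result
cat(comment(res))          # optimization sequences, start and finish time
head(attr(res, "trace"))   # utility values of all visited parameter combinations
```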

References

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.19.

Preussler, S., Kirchner, M., Goette, H., Kieser, M. (2020). Optimal designs for phase II/III drug development programs including methods for discounting of phase II results. BMC Medical Research Methodology, 20, 253.

Schoenfeld, D. (1981). The asymptotic properties of nonparametric tests for comparing survival distributions. Biometrika, 68(1), 316-319.

Examples

# Activate progress bar (optional)
## Not run: 
progressr::handlers(global = TRUE)

## End(Not run)
# Optimize

optimal_bias(w = 0.3,                       # define parameters for prior
  hr1 = 0.69, hr2 = 0.88, id1 = 210, id2 = 420,      # (https://web.imbi.uni-heidelberg.de/prior/)
  d2min = 20, d2max = 100, stepd2 = 5,               # define optimization set for d2
  hrgomin = 0.7, hrgomax = 0.9, stephrgo = 0.05,     # define optimization set for HRgo
  adj = "both",                                      # choose type of adjustment
  lambdamin = 0.2, lambdamax = 1, steplambda = 0.05, # define optimization set for lambda
  alphaCImin = 0.025, alphaCImax = 0.5,
  stepalphaCI = 0.025,                               # define optimization set for alphaCI
  alpha = 0.025, beta = 0.1, xi2 = 0.7, xi3 = 0.7,   # drug development planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,           # fixed/variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                        # set constraints
  steps1 = 1,                                        # define lower boundary for "small"
  stepm1 = 0.95,                                     # "medium"
  stepl1 = 0.85,                                     # and "large" effect size categories
  b1 = 1000, b2 = 2000, b3 = 3000,                   # define expected benefits 
  fixed = FALSE,                                     # true treatment effects are fixed/random
  num_cl = 1)                                        # number of cores for parallel computing
  

Optimal phase II/III drug development planning when discounting phase II results with binary endpoint

Description

The function optimal_bias_binary of the drugdevelopR package enables planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules, including methods for discounting of phase II results for binary endpoints (Preussler et al., 2020). The discounting may be necessary as programs that proceed to phase III can be overoptimistic about the treatment effect (i.e. they are biased). The true treatment effects can be assumed fixed or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_bias_binary(
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  n2min,
  n2max,
  stepn2,
  rrgomin,
  rrgomax,
  steprrgo,
  adj = "both",
  lambdamin = NULL,
  lambdamax = NULL,
  steplambda = NULL,
  alphaCImin = NULL,
  alphaCImax = NULL,
  stepalphaCI = NULL,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  b1,
  b2,
  b3,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

p0

assumed true rate of control group, see here for details

p11

assumed true rate of treatment group, see here for details

p12

assumed true rate of treatment group, see here for details

in1

amount of information for p11 in terms of sample size, see here for details

in2

amount of information for p12 in terms of sample size, see here for details

n2min

minimal total sample size for phase II; must be an even number

n2max

maximal total sample size for phase II; must be an even number

stepn2

step size for the optimization over n2; must be an even number

rrgomin

minimal threshold value for the go/no-go decision rule

rrgomax

maximal threshold value for the go/no-go decision rule

steprrgo

step size for the optimization over RRgo

adj

choose type of adjustment: "multiplicative", "additive", "both" or "all". When using "both", res[1,] contains the results using the multiplicative method and res[2,] contains the results using the additive method. When using "all", there are also res[3,] and res[4,], containing the results of a multiplicative and an additive method which do not only adjust the treatment effect but also the threshold value for the decision rule.

lambdamin

minimal multiplicative adjustment parameter lambda (i.e. use estimate with a retention factor)

lambdamax

maximal multiplicative adjustment parameter lambda (i.e. use estimate with a retention factor)

steplambda

stepsize for the adjustment parameter lambda

alphaCImin

minimal additive adjustment parameter alphaCI (i.e. adjust the lower bound of the one-sided confidence interval)

alphaCImax

maximal additive adjustment parameter alphaCI (i.e. adjust the lower bound of the one-sided confidence interval)

stepalphaCI

stepsize for alphaCI

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for the calculation of the sample size for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in RR scale, default: 1

stepm1

lower boundary for effect size category "medium" in RR scale = upper boundary for effect size category "small" in RR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in RR scale = upper boundary for effect size category "medium" in RR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

fixed

choose if true treatment effects are fixed or random, if TRUE p11 is used as fixed effect for p1

num_cl

number of clusters used for parallel computing, default: 1

Value

The output of the function is a data.frame object containing the optimization results:

Method

Type of adjustment: "multipl." (multiplicative adjustment of effect size), "add." (additive adjustment of effect size), "multipl2." (multiplicative adjustment of effect size and threshold), "add2." (additive adjustment of effect size and threshold)

Adj

optimal adjustment parameter (lambda or alphaCI according to Method)

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

RRgo

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set or the sum K2+K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III

sProg2

probability of a successful program with "medium" treatment effect in phase III

sProg3

probability of a successful program with "large" treatment effect in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Calling cat(comment()) on the data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.
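
With adj = "both", the result contains one row per adjustment method, so the two methods can be compared directly. A sketch, assuming res holds the return value of optimal_bias_binary(..., adj = "both"); the selected columns are illustrative.

```r
# Compare the multiplicative and additive adjustment methods row by row
res[res$Method == "multipl.", c("Adj", "u", "RRgo", "n2", "n3")]
res[res$Method == "add.",     c("Adj", "u", "RRgo", "n2", "n3")]
```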

References

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.19.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_bias_binary(w = 0.3,                 # define parameters for prior
  p0 = 0.6, p11 =  0.3, p12 = 0.5,
  in1 = 30, in2 = 60,                                 # (https://web.imbi.uni-heidelberg.de/prior/)
  n2min = 20, n2max = 100, stepn2 = 10,               # define optimization set for n2
  rrgomin = 0.7, rrgomax = 0.9, steprrgo = 0.05,      # define optimization set for RRgo
  adj = "both",                                       # choose type of adjustment
  alpha = 0.025, beta = 0.1,                          # drug development planning parameters
  lambdamin = 0.2, lambdamax = 1, steplambda = 0.05,  # define optimization set for lambda
  alphaCImin = 0.025, alphaCImax = 0.5,
  stepalphaCI = 0.025,                                # define optimization set for alphaCI
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,            # fixed and variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                         # set constraints
  steps1 = 1,                                         # define lower boundary for "small"
  stepm1 = 0.95,                                      # "medium"
  stepl1 = 0.85,                                      # and "large" effect size categories
  b1 = 1000, b2 = 2000, b3 = 3000,                    # define expected benefits
  fixed = TRUE,                                       # true treatment effects are fixed/random
  num_cl = 1)                                         # number of cores for parallelized computing
  


Generic function for optimizing drug development programs with bias adjustment

Description

Generic function for optimizing drug development programs with bias adjustment

Usage

optimal_bias_generic(
  adj = "both",
  lambdamin = NULL,
  lambdamax = NULL,
  steplambda = NULL,
  alphaCImin = NULL,
  alphaCImax = NULL,
  stepalphaCI = NULL
)

Arguments

adj

choose type of adjustment: "multiplicative", "additive", "both" or "all". When using "both", res[1,] contains the results using the multiplicative method and res[2,] contains the results using the additive method. When using "all", there are also res[3,] and res[4,], containing the results of a multiplicative and an additive method which do not only adjust the treatment effect but also the threshold value for the decision rule.

lambdamin

minimal multiplicative adjustment parameter lambda (i.e. use estimate with a retention factor)

lambdamax

maximal multiplicative adjustment parameter lambda (i.e. use estimate with a retention factor)

steplambda

stepsize for the adjustment parameter lambda

alphaCImin

minimal additive adjustment parameter alphaCI (i.e. adjust the lower bound of the one-sided confidence interval)

alphaCImax

maximal additive adjustment parameter alphaCI (i.e. adjust the lower bound of the one-sided confidence interval)

stepalphaCI

stepsize for alphaCI


Optimal phase II/III drug development planning when discounting phase II results with normally distributed endpoint

Description

The function optimal_bias_normal of the drugdevelopR package enables planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules, including methods for discounting of phase II results for normally distributed endpoints (Preussler et al., 2020). The discounting may be necessary as programs that proceed to phase III can be overoptimistic about the treatment effect (i.e. they are biased). The true treatment effects can be assumed fixed or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_bias_normal(
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  n2min,
  n2max,
  stepn2,
  kappamin,
  kappamax,
  stepkappa,
  adj = "both",
  lambdamin = NULL,
  lambdamax = NULL,
  steplambda = NULL,
  alphaCImin = NULL,
  alphaCImax = NULL,
  stepalphaCI = NULL,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 0,
  stepm1 = 0.5,
  stepl1 = 0.8,
  b1,
  b2,
  b3,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

Delta1

assumed true prior treatment effect measured as the standardized difference in means, see here for details

Delta2

assumed true prior treatment effect measured as the standardized difference in means, see here for details

in1

amount of information for Delta1 in terms of sample size, see here for details

in2

amount of information for Delta2 in terms of sample size, see here for details

a

lower boundary for the truncation of the prior distribution

b

upper boundary for the truncation of the prior distribution

n2min

minimal total sample size for phase II; must be an even number

n2max

maximal total sample size for phase II; must be an even number

stepn2

step size for the optimization over n2; must be an even number

kappamin

minimal threshold value kappa for the go/no-go decision rule

kappamax

maximal threshold value kappa for the go/no-go decision rule

stepkappa

step size for the optimization over the threshold value kappa

adj

choose type of adjustment: "multiplicative", "additive", "both" or "all". When using "both", res[1,] contains the results using the multiplicative method and res[2,] contains the results using the additive method. When using "all", there are also res[3,] and res[4,], containing the results of a multiplicative and an additive method which do not only adjust the treatment effect but also the threshold value for the decision rule.

lambdamin

minimal multiplicative adjustment parameter lambda (i.e. use estimate with a retention factor)

lambdamax

maximal multiplicative adjustment parameter lambda (i.e. use estimate with a retention factor)

steplambda

stepsize for the adjustment parameter lambda

alphaCImin

minimal additive adjustment parameter alphaCI (i.e. adjust the lower bound of the one-sided confidence interval)

alphaCImax

maximal additive adjustment parameter alphaCI (i.e. adjust the lower bound of the one-sided confidence interval)

stepalphaCI

stepsize for alphaCI

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the sample size for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small", default: 0

stepm1

lower boundary for effect size category "medium" = upper boundary for effect size category "small", default: 0.5

stepl1

lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8

b1

expected gain for effect size category "small" in 10^5 $

b2

expected gain for effect size category "medium" in 10^5 $

b3

expected gain for effect size category "large" in 10^5 $

fixed

choose if true treatment effects are fixed or following a prior distribution, if TRUE Delta1 is used as fixed effect

num_cl

number of clusters used for parallel computing, default: 1

Value

The output of the function is a data.frame object containing the optimization results:

Method

Type of adjustment: "multipl." (multiplicative adjustment of effect size), "add." (additive adjustment of effect size), "multipl2." (multiplicative adjustment of effect size and threshold), "add2." (additive adjustment of effect size and threshold)

Adj

optimal adjustment parameter (lambda or alphaCI according to Method)

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

Kappa

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set or the sum K2+K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III

sProg2

probability of a successful program with "medium" treatment effect in phase III

sProg3

probability of a successful program with "large" treatment effect in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Calling cat(comment()) on the data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_bias_normal(w=0.3,             # define parameters for prior
  Delta1 = 0.375, Delta2 = 0.625, in1=300, in2=600,    # (https://web.imbi.uni-heidelberg.de/prior/)
  a = 0.25, b = 0.75,
  n2min = 20, n2max = 100, stepn2 = 10,                # define optimization set for n2
  kappamin = 0.02, kappamax = 0.2, stepkappa = 0.02,   # define optimization set for kappa
  adj = "both",                                        # choose type of adjustment
  lambdamin = 0.2, lambdamax = 1, steplambda = 0.05,   # define optimization set for lambda
  alphaCImin = 0.025, alphaCImax = 0.5,
  stepalphaCI = 0.025,                                 # define optimization set for alphaCI
  alpha = 0.025, beta = 0.1,                           # drug development planning parameters
  c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,           # fixed and variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                          # set constraints
  steps1 = 0,                                          # define lower boundary for "small"
  stepm1 = 0.5,                                        # "medium"
  stepl1 = 0.8,                                        # and "large" effect size categories
  b1 = 3000, b2 = 8000, b3 = 10000,                    # define expected benefits
  fixed = TRUE,                                        # true treatment effects are fixed/random
  num_cl = 1)                                          # number of cores for parallel computing
                                          

Optimal phase II/III drug development planning with binary endpoint

Description

The optimal_binary function of the drugdevelopR package enables planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules for binary endpoints. In this case, the treatment effect is measured by the risk ratio (RR). The true treatment effects can be assumed fixed or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_binary(
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  n2min,
  n2max,
  stepn2,
  rrgomin,
  rrgomax,
  steprrgo,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  b1,
  b2,
  b3,
  gamma = 0,
  fixed = FALSE,
  skipII = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

p0

assumed true rate of control group, see here for details

p11

assumed true rate of treatment group, see here for details

p12

assumed true rate of treatment group, see here for details

in1

amount of information for p11 in terms of sample size, see here for details

in2

amount of information for p12 in terms of sample size, see here for details

n2min

minimal total sample size for phase II; must be an even number

n2max

maximal total sample size for phase II; must be an even number

stepn2

step size for the optimization over n2; must be an even number

rrgomin

minimal threshold value for the go/no-go decision rule

rrgomax

maximal threshold value for the go/no-go decision rule

steprrgo

step size for the optimization over RRgo

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for the calculation of the sample size for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in RR scale, default: 1

stepm1

lower boundary for effect size category "medium" in RR scale = upper boundary for effect size category "small" in RR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in RR scale = upper boundary for effect size category "medium" in RR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

gamma

to model different populations in phase II and III choose gamma != 0, default: 0, see here for details

fixed

choose whether true treatment effects are fixed or random; if TRUE, p11 is used as the fixed treatment effect for p1

skipII

choose if skipping phase II is an option, default: FALSE. If TRUE, the program calculates the expected utility for the case when phase II is skipped and compares it to the situation when phase II is not skipped. The results are then returned as a two-row data frame: res[1, ] contains the results when phase II is included and res[2, ] the results when phase II is skipped. res[2, ] has an additional parameter, res[2, ]$median_prior_RR, the assumed effect size used for planning the phase III study when phase II is skipped.

num_cl

number of clusters used for parallel computing, default: 1

Value

The output of the function is a data.frame object containing the optimization results:

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

RRgo

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III

sProg2

probability of a successful program with "medium" treatment effect in phase III

sProg3

probability of a successful program with "large" treatment effect in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Calling cat(comment()) on the data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.
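
The metadata described above can be read off the returned object. The following sketch assumes res holds the data frame returned by the optimal_binary() call from the Examples section below; the object name is illustrative.

```r
# Assuming `res` is the data frame returned by optimal_binary()
# (the name `res` is illustrative):
cat(comment(res))            # optimization sequences, start and finish time
trace <- attr(res, "trace")  # utility values of all visited parameter combinations
head(trace)
```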

References

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.19.

Examples

# Activate progress bar (optional)
## Not run: 
progressr::handlers(global = TRUE)

## End(Not run)
# Optimize

optimal_binary(w = 0.3,                             # define parameters for prior
  p0 = 0.6, p11 =  0.3, p12 = 0.5,
  in1 = 30, in2 = 60,                               # (https://web.imbi.uni-heidelberg.de/prior/)
  n2min = 20, n2max = 100, stepn2 = 4,              # define optimization set for n2
  rrgomin = 0.7, rrgomax = 0.9, steprrgo = 0.05,    # define optimization set for RRgo
  alpha = 0.025, beta = 0.1,                        # drug development planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,          # fixed and variable costs for phase II/III,
  K = Inf, N = Inf, S = -Inf,                       # set constraints
  steps1 = 1,                                       # define lower boundary for "small"
  stepm1 = 0.95,                                    # "medium"
  stepl1 = 0.85,                                    # and "large" treatment effect size categories
  b1 = 1000, b2 = 2000, b3 = 3000,                  # define expected benefits
  gamma = 0,                                        # population structures in phase II/III
  fixed = FALSE,                                    # true treatment effects are fixed/random
  skipII = FALSE,                                   # choose if skipping phase II is an option
  num_cl = 2)                                       # number of cores for parallelized computing
  

Generic function for optimizing programs with binary endpoints

Description

Generic function for optimizing programs with binary endpoints

Usage

optimal_binary_generic(
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  n2min,
  n2max,
  stepn2,
  rrgomin,
  rrgomax,
  steprrgo,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  b1,
  b2,
  b3,
  gamma = 0,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

p0

assumed true rate of the control group, see https://web.imbi.uni-heidelberg.de/prior/ for details

p11

assumed true rate of the treatment group, see https://web.imbi.uni-heidelberg.de/prior/ for details

p12

assumed true rate of the treatment group, see https://web.imbi.uni-heidelberg.de/prior/ for details

in1

amount of information for p11 in terms of sample size, see https://web.imbi.uni-heidelberg.de/prior/ for details

in2

amount of information for p12 in terms of sample size, see https://web.imbi.uni-heidelberg.de/prior/ for details

n2min

minimal total sample size for phase II; must be an even number

n2max

maximal total sample size for phase II; must be an even number

stepn2

step size for the optimization over n2; must be an even number

rrgomin

minimal threshold value for the go/no-go decision rule

rrgomax

maximal threshold value for the go/no-go decision rule

steprrgo

step size for the optimization over RRgo

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the number of events for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in RR scale, default: 1

stepm1

lower boundary for effect size category "medium" in RR scale = upper boundary for effect size category "small" in RR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in RR scale = upper boundary for effect size category "medium" in RR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

gamma

to model different populations in phase II and III, choose gamma != 0; default: 0, see https://web.imbi.uni-heidelberg.de/prior/ for details

fixed

choose whether true treatment effects are fixed or random; if TRUE, p11 is used as the fixed treatment effect for p1

num_cl

number of clusters used for parallel computing, default: 1


Generic function for optimizing drug development programs

Description

Generic function for optimizing drug development programs

Usage

optimal_generic(beta, alpha, c2, c3, c02, c03, K, N, S, b1, b2, b3, num_cl)

Arguments

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

num_cl

number of clusters used for parallel computing, default: 1


Optimal phase II/III drug development planning for multi-arm programs with time-to-event endpoint

Description

The function optimal_multiarm of the drugdevelopR package enables planning of multi-arm phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules (Preussler et al., 2019) for time-to-event endpoints. So far, only three-arm trials with two treatments and one control are supported. The assumed true treatment effects are fixed (planning is also possible via the user-friendly R Shiny app multiarm). Fast computing is enabled by parallel programming.

Usage

optimal_multiarm(
  hr1,
  hr2,
  ec,
  n2min,
  n2max,
  stepn2,
  hrgomin,
  hrgomax,
  stephrgo,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  b1,
  b2,
  b3,
  strategy,
  num_cl = 1
)

Arguments

hr1

assumed true treatment effect on HR scale for treatment 1

hr2

assumed true treatment effect on HR scale for treatment 2

ec

control arm event rate for phase II and III

n2min

minimal total sample size in phase II, must be divisible by 3

n2max

maximal total sample size in phase II, must be divisible by 3

stepn2

step size for the optimization over n2, must be divisible by 3

hrgomin

minimal threshold value for the go/no-go decision rule

hrgomax

maximal threshold value for the go/no-go decision rule

stephrgo

step size for the optimization over HRgo

alpha

one-sided significance level/family-wise error rate

beta

type-II error rate for any pair, i.e. 1 - beta is the (any-pair) power for calculation of the number of events for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in HR scale, default: 1

stepm1

lower boundary for effect size category "medium" in HR scale = upper boundary for effect size category "small" in HR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in HR scale = upper boundary for effect size category "medium" in HR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

strategy

choose strategy: 1 (only the best promising candidate), 2 (all promising candidates) or 3 (both strategies)

num_cl

number of clusters used for parallel computing, default: 1

Value

The output of the function is a data.frame object containing the optimization results:

Strategy

Strategy, 1: "only best promising" or 2: "all promising"

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

HRgo

optimal threshold value for the decision rule to go to phase III

d2

optimal total number of events for phase II

d3

total expected number of events for phase III; rounded to next natural number

d

total expected number of events in the program; d = d2 + d3

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg2

probability of a successful program with two arms in phase III

sProg3

probability of a successful program with three arms in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Calling cat(comment()) on the data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.

References

Preussler, S., Kirchner, M., Goette, H., Kieser, M. (2019). Optimal Designs for Multi-Arm Phase II/III Drug Development Programs. doi:10.1080/19466315.2019.1702092.

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.19.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_multiarm(hr1 = 0.75, hr2 = 0.80,    # define assumed true HRs 
  ec = 0.6,                                          # control arm event rate
  n2min = 30, n2max = 90, stepn2 = 6,                # define optimization set for n2
  hrgomin = 0.7, hrgomax = 0.9, stephrgo = 0.05,     # define optimization set for HRgo
  alpha = 0.025, beta = 0.1,                         # drug development planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,           # fixed/variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                        # set constraints
  steps1 = 1,                                        # define lower boundary for "small"
  stepm1 = 0.95,                                     # "medium"
  stepl1 = 0.85,                                     # and "large" effect size categories
  b1 = 1000, b2 = 2000, b3 = 3000,                   # define expected benefit 
  strategy = 1,                                      # choose strategy: 1, 2 or 3
  num_cl = 1)                                        # number of cores for parallelized computing 
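
The documentation above states that strategy = 3 evaluates both strategies. A hedged sketch (assuming the result then contains one row per strategy, distinguishable via the Strategy column, as the Value section suggests):

```r
res <- optimal_multiarm(hr1 = 0.75, hr2 = 0.80, ec = 0.6,
  n2min = 30, n2max = 90, stepn2 = 6,
  hrgomin = 0.7, hrgomax = 0.9, stephrgo = 0.05,
  alpha = 0.025, beta = 0.1,
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
  b1 = 1000, b2 = 2000, b3 = 3000,
  strategy = 3,              # evaluate both strategies in one call
  num_cl = 1)
res[res$Strategy == 1, ]    # optimum under "only best promising"
res[res$Strategy == 2, ]    # optimum under "all promising"
```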
  


Optimal phase II/III drug development planning for multi-arm programs with binary endpoint

Description

The optimal_multiarm_binary function enables planning of multi-arm phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules. For binary endpoints, the treatment effect is measured by the risk ratio (RR). So far, only three-arm trials with two treatments and one control are supported. The assumed true treatment effects can be fixed or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_multiarm_binary(
  p0,
  p11,
  p12,
  n2min,
  n2max,
  stepn2,
  rrgomin,
  rrgomax,
  steprrgo,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  b1,
  b2,
  b3,
  strategy,
  num_cl = 1
)

Arguments

p0

assumed true rate of the control group

p11

assumed true rate of treatment arm 1

p12

assumed true rate of treatment arm 2

n2min

minimal total sample size in phase II, must be divisible by 3

n2max

maximal total sample size in phase II, must be divisible by 3

stepn2

step size for the optimization over n2, must be divisible by 3

rrgomin

minimal threshold value for the go/no-go decision rule

rrgomax

maximal threshold value for the go/no-go decision rule

steprrgo

step size for the optimization over RRgo

alpha

one-sided significance level/family-wise error rate

beta

type-II error rate for any pair, i.e. 1 - beta is the (any-pair) power for calculation of the sample size for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in RR scale, default: 1

stepm1

lower boundary for effect size category "medium" in RR scale = upper boundary for effect size category "small" in RR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in RR scale = upper boundary for effect size category "medium" in RR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

strategy

choose strategy: 1 (only the best promising candidate), 2 (all promising candidates) or 3 (both strategies)

num_cl

number of clusters used for parallel computing, default: 1

Value

The output of the function is a data.frame object containing the optimization results:

Strategy

Strategy, 1: "only best promising" or 2: "all promising"

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

RRgo

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg2

probability of a successful program with two arms in phase III

sProg3

probability of a successful program with three arms in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Calling cat(comment()) on the data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.

References

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.19.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_multiarm_binary( p0 = 0.6, 
  p11 =  0.3, p12 = 0.5, 
  n2min = 20, n2max = 100, stepn2 = 4,               # define optimization set for n2
  rrgomin = 0.7, rrgomax = 0.9, steprrgo = 0.05,     # define optimization set for RRgo
  alpha = 0.025, beta = 0.1,                         # drug development planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,           # fixed/variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                        # set constraints
  steps1 = 1,                                        # define lower boundary for "small"
  stepm1 = 0.95,                                     # "medium"
  stepl1 = 0.85,                                     # and "large" effect size categories
  b1 = 1000, b2 = 2000, b3 = 3000,                   # define expected benefits 
  strategy = 1,                                      # choose strategy: 1, 2 or 3
  num_cl = 1)                                        # number of cores for parallelized computing 
  

Generic function for optimizing multi-arm programs

Description

Generic function for optimizing multi-arm programs

Usage

optimal_multiarm_generic(
  n2min,
  n2max,
  stepn2,
  beta,
  alpha,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  strategy,
  num_cl
)

Arguments

n2min

minimal total sample size in phase II, must be divisible by 3

n2max

maximal total sample size in phase II, must be divisible by 3

stepn2

step size for the optimization over n2, must be divisible by 3

beta

type-II error rate for any pair, i.e. 1 - beta is the (any-pair) power for calculation of the sample size for phase III

alpha

one-sided significance level/family-wise error rate

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

strategy

choose strategy: 1 (only the best promising candidate), 2 (all promising candidates) or 3 (both strategies)

num_cl

number of clusters used for parallel computing, default: 1


Optimal phase II/III drug development planning for multi-arm programs with normally distributed endpoint

Description

The optimal_multiarm_normal function enables planning of multi-arm phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules. For normally distributed endpoints, the treatment effect is measured by the standardized difference in means (Delta). So far, only three-arm trials with two treatments and one control are supported. The assumed true treatment effects can be fixed or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_multiarm_normal(
  Delta1,
  Delta2,
  n2min,
  n2max,
  stepn2,
  kappamin,
  kappamax,
  stepkappa,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 0,
  stepm1 = 0.5,
  stepl1 = 0.8,
  b1,
  b2,
  b3,
  strategy,
  num_cl = 1
)

Arguments

Delta1

assumed true treatment effect as the standardized difference in means for treatment arm 1

Delta2

assumed true treatment effect as the standardized difference in means for treatment arm 2

n2min

minimal total sample size in phase II, must be divisible by 3

n2max

maximal total sample size in phase II, must be divisible by 3

stepn2

step size for the optimization over n2, must be divisible by 3

kappamin

minimal threshold value kappa for the go/no-go decision rule

kappamax

maximal threshold value kappa for the go/no-go decision rule

stepkappa

step size for the optimization over the threshold value kappa

alpha

one-sided significance level/family-wise error rate

beta

type-II error rate for any pair, i.e. 1 - beta is the (any-pair) power for calculation of the sample size for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small", default: 0

stepm1

lower boundary for effect size category "medium" = upper boundary for effect size category "small", default: 0.5

stepl1

lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

strategy

choose strategy: 1 (only the best promising candidate), 2 (all promising candidates) or 3 (both strategies)

num_cl

number of clusters used for parallel computing, default: 1

Value

The output of the function is a data.frame object containing the optimization results:

Strategy

Strategy, 1: "only best promising" or 2: "all promising"

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

Kappa

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg2

probability of a successful program with two arms in phase III

sProg3

probability of a successful program with three arms in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Calling cat(comment()) on the data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_multiarm_normal(Delta1 = 0.375, Delta2 = 0.625,     
  n2min = 20, n2max = 100, stepn2 = 4,                 # define optimization set for n2
  kappamin = 0.02, kappamax = 0.2, stepkappa = 0.02,   # define optimization set for kappa
  alpha = 0.025, beta = 0.1,                           # drug development planning parameters
  c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,           # fixed/variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                          # set constraints
  steps1 = 0,                                          # define lower boundary for "small"
  stepm1 = 0.5,                                        # "medium"
  stepl1 = 0.8,                                        # and "large" effect size categories
  b1 = 3000, b2 = 8000, b3 = 10000,                    # define expected benefits 
  strategy = 1,
  num_cl = 1)                                          # number of cores for parallelized computing 
  

Generic function for optimizing drug development programs with multiple endpoints

Description

This function is only used for documentation generation.

Usage

optimal_multiple_generic(rho, alpha, n2min, n2max, stepn2, fixed)

Arguments

rho

correlation between the two endpoints

alpha

one-sided significance level/family-wise error rate

n2min

minimal total sample size in phase II, must be divisible by 3

n2max

maximal total sample size in phase II, must be divisible by 3

stepn2

step size for the optimization over n2, must be divisible by 3

fixed

choose if true treatment effects are fixed or random

Value

No return value


Optimal phase II/III drug development planning for programs with multiple normally distributed endpoints

Description

The function optimal_multiple_normal of the drugdevelopR package enables planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules for two-arm trials with two normally distributed endpoints and one control group (Preussler et al., 2019).

Usage

optimal_multiple_normal(
  Delta1,
  Delta2,
  in1,
  in2,
  sigma1,
  sigma2,
  n2min,
  n2max,
  stepn2,
  kappamin,
  kappamax,
  stepkappa,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 0,
  stepm1 = 0.5,
  stepl1 = 0.8,
  b1,
  b2,
  b3,
  rho,
  fixed,
  relaxed = FALSE,
  num_cl = 1
)

Arguments

Delta1

assumed true treatment effect for endpoint 1 measured as the difference in means

Delta2

assumed true treatment effect for endpoint 2 measured as the difference in means

in1

amount of information for Delta1 in terms of number of events

in2

amount of information for Delta2 in terms of number of events

sigma1

variance of endpoint 1

sigma2

variance of endpoint 2

n2min

minimal total sample size in phase II, must be divisible by 3

n2max

maximal total sample size in phase II, must be divisible by 3

stepn2

step size for the optimization over n2, must be divisible by 3

kappamin

minimal threshold value kappa for the go/no-go decision rule

kappamax

maximal threshold value kappa for the go/no-go decision rule

stepkappa

step size for the optimization over the threshold value kappa

alpha

one-sided significance level/family-wise error rate

beta

type-II error rate for any pair, i.e. 1 - beta is the (any-pair) power for calculation of the sample size for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small", default: 0

stepm1

lower boundary for effect size category "medium" = upper boundary for effect size category "small", default: 0.5

stepl1

lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8

b1

expected gain for effect size category "small" in 10^5 $

b2

expected gain for effect size category "medium" in 10^5 $

b3

expected gain for effect size category "large" in 10^5 $

rho

correlation between the two endpoints

fixed

choose if true treatment effects are fixed or random

relaxed

choose whether the relaxed or the strict decision rule is applied; default: FALSE, i.e. the strict rule

num_cl

number of clusters used for parallel computing, default: 1

Details

For this setting, the drug development program is defined to be successful if it proceeds from phase II to phase III and all endpoints show a statistically significant treatment effect in phase III. For example, this situation is found in Alzheimer’s disease trials, where a drug should show significant results in improving cognition (cognitive endpoint) as well as in improving activities of daily living (functional endpoint).

The effect size categories small, medium and large are applied to both endpoints. In order to define an overall effect size from the two individual effect sizes, the function implements two different combination rules, a strict rule and a relaxed rule (see the relaxed parameter).

Fast computing is enabled by parallel programming.

Monte Carlo simulations are applied for calculating utility, event count and other operating characteristics in this setting. Hence, the results are affected by random uncertainty.

Value

The output of the function is a data.frame object containing the optimization results:

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

Kappa

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III

sProg2

probability of a successful program with "medium" treatment effect in phase III

sProg3

probability of a successful program with "large" treatment effect in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Calling cat(comment()) on the data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.

References

Kieser, M., Kirchner, M., Dölger, E., Götte, H. (2018). Optimal planning of phase II/III programs for clinical trials with multiple endpoints. doi:10.1002/pst.1861.

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.19.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

set.seed(123) # This function relies on Monte Carlo integration
optimal_multiple_normal(Delta1 = 0.75,
  Delta2 = 0.80, in1 = 300, in2 = 600,               # define assumed true effects
  sigma1 = 8, sigma2= 12,                            # variances for both endpoints
  n2min = 30, n2max = 90, stepn2 = 10,               # define optimization set for n2
  kappamin = 0.05, kappamax = 0.2, stepkappa = 0.05, # define optimization set for kappa
  alpha = 0.025, beta = 0.1,                         # planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,           # fixed/variable costs: phase II/III
  K = Inf, N = Inf, S = -Inf,                        # set constraints
  steps1 = 0,                                        # define lower boundary for "small"
  stepm1 = 0.5,                                      # "medium"
  stepl1 = 0.8,                                      # and "large" effect size categories
  b1 = 1000, b2 = 2000, b3 = 3000,                   # define expected benefit
  rho = 0.5, relaxed = TRUE,                         # correlation; strict or relaxed rule
  fixed = TRUE,                                      # fixed treatment effect
  num_cl = 1)                                        # parallelized computing
  


Optimal phase II/III drug development planning for programs with multiple time-to-event endpoints

Description

The function optimal_multiple_tte of the drugdevelopR package enables planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules (Preussler et al., 2019) in a two-arm trial with two time-to-event endpoints.

Usage

optimal_multiple_tte(
  hr1,
  hr2,
  id1,
  id2,
  n2min,
  n2max,
  stepn2,
  hrgomin,
  hrgomax,
  stephrgo,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  b11,
  b21,
  b31,
  b12,
  b22,
  b32,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  rho,
  fixed = TRUE,
  num_cl = 1
)

Arguments

hr1

assumed true treatment effect on HR scale for endpoint 1 (e.g. OS)

hr2

assumed true treatment effect on HR scale for endpoint 2 (e.g. PFS)

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

n2min

minimal total sample size in phase II, must be divisible by 3

n2max

maximal total sample size in phase II, must be divisible by 3

stepn2

stepsize for the optimization over n2, must be divisible by 3

hrgomin

minimal threshold value for the go/no-go decision rule

hrgomax

maximal threshold value for the go/no-go decision rule

stephrgo

step size for the optimization over HRgo

alpha

one-sided significance level/family-wise error rate

beta

type-II error rate for any pair, i.e. 1 - beta is the (any-pair) power for calculation of the number of events for phase III

c2

variable per-patient cost for phase II in 10^5 $.

c3

variable per-patient cost for phase III in 10^5 $.

c02

fixed cost for phase II in 10^5 $.

c03

fixed cost for phase III in 10^5 $.

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b11

expected gain for effect size category "small" if endpoint 1 is significant (and endpoint 2 may or may not be significant)

b21

expected gain for effect size category "medium" if endpoint 1 is significant (and endpoint 2 may or may not be significant)

b31

expected gain for effect size category "large" if endpoint 1 is significant (and endpoint 2 may or may not be significant)

b12

expected gain for effect size category "small" if endpoint 1 is not significant, but endpoint 2 is

b22

expected gain for effect size category "medium" if endpoint 1 is not significant, but endpoint 2 is

b32

expected gain for effect size category "large" if endpoint 1 is not significant, but endpoint 2 is

steps1

lower boundary for effect size category "small" in HR scale, default: 1

stepm1

lower boundary for effect size category "medium" in HR scale = upper boundary for effect size category "small" in HR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in HR scale = upper boundary for effect size category "medium" in HR scale, default: 0.85

rho

correlation between the two endpoints

fixed

choose if the true treatment effects are assumed fixed (TRUE) or modelled by a prior distribution (FALSE)

num_cl

number of clusters used for parallel computing, default: 1

Details

In this setting, the drug development program is defined to be successful if it proceeds from phase II to phase III and at least one endpoint shows a statistically significant treatment effect in phase III. For example, this situation is found in oncology trials, where overall survival (OS) and progression-free survival (PFS) are the two endpoints of interest.

The gain of a successful program may differ according to the importance of the endpoint that is significant. If endpoint 1 is significant (no matter whether endpoint 2 is significant or not), then the gains b11, b21 and b31 will be used for calculating the utility. If only endpoint 2 is significant, then b12, b22 and b32 will be used. This also matches the oncology example, where significance in OS (i.e. endpoint 1) implies larger expected gains than significance in PFS alone (i.e. endpoint 2).
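The gain-selection rule above can be sketched as a small helper function; this is a hypothetical illustration, not the package's internal implementation:

```r
# Hypothetical sketch of how the expected gain is chosen, following the
# rule described above (not the package's internal code).
select_gain <- function(sig1, sig2, category,
                        b11, b21, b31, b12, b22, b32) {
  if (sig1) {
    # endpoint 1 significant (endpoint 2 may or may not be): use b11/b21/b31
    switch(category, small = b11, medium = b21, large = b31)
  } else if (sig2) {
    # only endpoint 2 significant: use b12/b22/b32
    switch(category, small = b12, medium = b22, large = b32)
  } else {
    0  # no significant endpoint: the program is not successful
  }
}

select_gain(sig1 = TRUE, sig2 = FALSE, "large",
            b11 = 1000, b21 = 2000, b31 = 3000,
            b12 = 1000, b22 = 1500, b32 = 2000)  # -> 3000
```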

Fast computing is enabled by parallel programming.

Monte Carlo simulations are applied for calculating utility, event count and other operating characteristics in this setting. Hence, the results are affected by random uncertainty. The extent of this uncertainty is discussed in Kieser et al. (2018).

Value

The output of the function is a data.frame object containing the optimization results:

OP

probability that one endpoint is significant

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

HRgo

optimal threshold value for the decision rule to go to phase III

d2

optimal total number of events for phase II

d3

total expected number of events for phase III; rounded to next natural number

d

total expected number of events in the program; d = d2 + d3

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III

sProg2

probability of a successful program with "medium" treatment effect in phase III

sProg3

probability of a successful program with "large" treatment effect in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Applying cat(comment()) to the data frame prints the optimization sequences used as well as the start and finish times of the optimization procedure. The attribute attr(,"trace") contains the utility values of all parameter combinations visited during optimization.

References

Kieser, M., Kirchner, M., Dölger, E., Götte, H. (2018). Optimal planning of phase II/III programs for clinical trials with multiple endpoints. Pharmaceutical Statistics, 17(5), 437-457.

Preussler, S., Kirchner, M., Goette, H., Kieser, M. (2019). Optimal Designs for Multi-Arm Phase II/III Drug Development Programs. Submitted to a peer-reviewed journal.

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.2019.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

set.seed(123) # This function relies on Monte Carlo integration
optimal_multiple_tte(hr1 = 0.75,
  hr2 = 0.80, id1 = 210, id2 = 420,          # define assumed true HRs
  n2min = 30, n2max = 90, stepn2 = 6,        # define optimization set for n2
  hrgomin = 0.7, hrgomax = 0.9, stephrgo = 0.05, # define optimization set for HRgo
  alpha = 0.025, beta = 0.1,                 # drug development planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,   # fixed/variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                # set constraints
  steps1 = 1,                                # define lower boundary for "small"
  stepm1 = 0.95,                             # "medium"
  stepl1 = 0.85,                             # and "large" effect size categories
  b11 = 1000, b21 = 2000, b31 = 3000,
  b12 = 1000, b22 = 1500, b32 = 2000,        # define expected benefits (both scenarios)
  rho = 0.6, fixed = TRUE,                   # correlation and treatment effect
  num_cl = 1)                                # number of cores for parallelized computing
 


Optimal phase II/III drug development planning where several phase III trials are performed for time-to-event endpoints

Description

The function optimal_multitrial of the drugdevelopR package enables planning of phase II/III drug development programs with time-to-event endpoints for programs with several phase III trials (Preussler et al., 2019). Its main output values are the optimal sample size allocation and optimal go/no-go decision rules. The assumed true treatment effects can be fixed (planning is then also possible via the user-friendly R Shiny app multitrial) or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_multitrial(
  w,
  hr1,
  hr2,
  id1,
  id2,
  d2min,
  d2max,
  stepd2,
  hrgomin,
  hrgomax,
  stephrgo,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  b1,
  b2,
  b3,
  case,
  strategy = TRUE,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution, see this Shiny application for the choice of weights

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

d2min

minimal number of events for phase II

d2max

maximal number of events for phase II

stepd2

step size for the optimization over d2

hrgomin

minimal threshold value for the go/no-go decision rule

hrgomax

maximal threshold value for the go/no-go decision rule

stephrgo

step size for the optimization over HRgo

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the number of events for phase III by Schoenfeld's formula (Schoenfeld 1981)

xi2

assumed event rate for phase II, used for calculating the sample size of phase II via n2 = d2/xi2

xi3

event rate for phase III, used for calculating the sample size of phase III in analogy to xi2

c2

variable per-patient cost for phase II in 10^5 $.

c3

variable per-patient cost for phase III in 10^5 $.

c02

fixed cost for phase II in 10^5 $.

c03

fixed cost for phase III in 10^5 $.

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

case

choose case: "at least 1, 2 or 3 significant trials needed for approval"

strategy

choose strategy: "conduct 1, 2, 3 or 4 trials in order to achieve the case's goal"; TRUE calculates all strategies of the selected case

fixed

choose if true treatment effects are fixed or random; if TRUE, hr1 is used as a fixed effect and hr2 is ignored

num_cl

number of clusters used for parallel computing, default: 1
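The event-number calculation via Schoenfeld's formula and the event-rate conversion n2 = d2/xi2 described above can be sketched as follows; this is an illustrative base-R approximation for 1:1 allocation, and the package's internal rounding may differ:

```r
# Schoenfeld's (1981) approximate number of events for one phase III trial
# with 1:1 allocation, one-sided level alpha and power 1 - beta.
d_schoenfeld <- function(hr, alpha = 0.025, beta = 0.1) {
  ceiling(4 * (qnorm(1 - alpha) + qnorm(1 - beta))^2 / log(hr)^2)
}

d3 <- d_schoenfeld(hr = 0.69)  # events needed to detect HR = 0.69
n3 <- ceiling(d3 / 0.7)        # sample size with assumed event rate xi3 = 0.7
```

Stronger assumed effects (smaller HR) require fewer events, and a lower event rate xi3 inflates the sample size needed to observe the required number of events.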

Value

The output of the function is a data.frame object containing the optimization results:

Case

Case: "number of significant trials needed"

Strategy

Strategy: "number of trials to be conducted in order to achieve the goal of the case"

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

HRgo

optimal threshold value for the decision rule to go to phase III

d2

optimal total number of events for phase II

d3

total expected number of events for phase III; rounded to next natural number

d

total expected number of events in the program; d = d2 + d3

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III (lower boundary in HR scale is set to 1, as proposed by IQWiG (2016))

sProg2

probability of a successful program with "medium" treatment effect in phase III (lower boundary in HR scale is set to 0.95, as proposed by IQWiG (2016))

sProg3

probability of a successful program with "large" treatment effect in phase III (lower boundary in HR scale is set to 0.85, as proposed by IQWiG (2016))

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Applying cat(comment()) to the data frame prints the optimization sequences used as well as the start and finish times of the optimization procedure. The attribute attr(,"trace") contains the utility values of all parameter combinations visited during optimization.

Effect sizes

In other settings, the definition of "small", "medium" and "large" effect sizes can be user-specified using the input parameters steps1, stepm1 and stepl1. Due to the complexity of the multitrial setting, this feature is not included for this setting. Instead, the effect sizes were set to predefined values as explained under sProg1, sProg2 and sProg3 in the Value section.

References

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.2019.

Preussler, S., Kieser, M., and Kirchner, M. (2019). Optimal sample size allocation and go/no-go decision rules for phase II/III programs where several phase III trials are performed. Biometrical Journal, 61(2), 357-378.

Schoenfeld, D. (1981). The asymptotic properties of nonparametric tests for comparing survival distributions. Biometrika, 68(1), 316-319.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_multitrial(w = 0.3,                # define parameters for prior
  hr1 = 0.69, hr2 = 0.88, id1 = 210, id2 = 420,     # (https://web.imbi.uni-heidelberg.de/prior/)
  d2min = 20, d2max = 100, stepd2 = 5,              # define optimization set for d2
  hrgomin = 0.7, hrgomax = 0.9, stephrgo = 0.05,    # define optimization set for HRgo
  alpha = 0.025, beta = 0.1, xi2 = 0.7, xi3 = 0.7,  # drug development planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,          # fixed and variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                       # set constraints
  b1 = 1000, b2 = 2000, b3 = 3000,                  # expected benefit for each effect size
  case = 1, strategy = TRUE,                        # choose Case and Strategy
  fixed = TRUE,                                     # true treatment effects are fixed/random
  num_cl = 1)                                       # number of cores for parallelized computing


Optimal phase II/III drug development planning where several phase III trials are performed

Description

The optimal_multitrial_binary function enables planning of phase II/III drug development programs with several phase III trials for the same binary endpoint. The main output values are optimal sample size allocation and go/no-go decision rules. For binary endpoints, the treatment effect is measured by the risk ratio (RR).

Usage

optimal_multitrial_binary(
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  n2min,
  n2max,
  stepn2,
  rrgomin,
  rrgomax,
  steprrgo,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  b1,
  b2,
  b3,
  case,
  strategy = TRUE,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

p0

assumed true rate of control group, see here for details

p11

assumed true rate of treatment group, see here for details

p12

assumed true rate of treatment group, see here for details

in1

amount of information for p11 in terms of sample size, see here for details

in2

amount of information for p12 in terms of sample size, see here for details

n2min

minimal total sample size for phase II; must be an even number

n2max

maximal total sample size for phase II, must be an even number

stepn2

step size for the optimization over n2; must be an even number

rrgomin

minimal threshold value for the go/no-go decision rule

rrgomax

maximal threshold value for the go/no-go decision rule

steprrgo

step size for the optimization over RRgo

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the number of events for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

case

choose case: "at least 1, 2 or 3 significant trials needed for approval"

strategy

choose strategy: "conduct 1, 2, 3 or 4 trials in order to achieve the case's goal"; TRUE calculates all strategies of the selected case

fixed

choose if true treatment effects are fixed or random; if TRUE, p11 is used as the fixed effect for p1

num_cl

number of clusters used for parallel computing, default: 1

Details

The assumed true treatment effects can be assumed fixed or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package.

Fast computing is enabled by parallel programming.

Value

The output of the function is a data.frame object containing the optimization results:

Case

Case: "number of significant trials needed"

Strategy

Strategy: "number of trials to be conducted in order to achieve the goal of the case"

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

RRgo

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III (lower boundary in RR scale is set to 1, as proposed by IQWiG (2016))

sProg2

probability of a successful program with "medium" treatment effect in phase III (lower boundary in RR scale is set to 0.95, as proposed by IQWiG (2016))

sProg3

probability of a successful program with "large" treatment effect in phase III (lower boundary in RR scale is set to 0.85, as proposed by IQWiG (2016))

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Applying cat(comment()) to the data frame prints the optimization sequences used as well as the start and finish times of the optimization procedure. The attribute attr(,"trace") contains the utility values of all parameter combinations visited during optimization.

Effect sizes

In other settings, the definition of "small", "medium" and "large" effect sizes can be user-specified using the input parameters steps1, stepm1 and stepl1. Due to the complexity of the multitrial setting, this feature is not included for this setting. Instead, the effect sizes were set to predefined values as explained under sProg1, sProg2 and sProg3 in the Value section.

References

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.2019.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_multitrial_binary(w = 0.3,         # define parameters for prior
  p0 = 0.6, p11 =  0.3, p12 = 0.5,
  in1 = 30, in2 = 60,                             # (https://web.imbi.uni-heidelberg.de/prior/)
  n2min = 20, n2max = 100, stepn2 = 4,            # define optimization set for n2
  rrgomin = 0.7, rrgomax = 0.9, steprrgo = 0.05,  # define optimization set for RRgo
  alpha = 0.025, beta = 0.1,                      # drug development planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,        # fixed and variable costs for phase II/III,
  K = Inf, N = Inf, S = -Inf,                     # set constraints
  b1 = 1000, b2 = 2000, b3 = 3000,                # expected benefit for each effect size
  case = 1, strategy = TRUE,                      # choose Case and Strategy
  fixed = TRUE,                                   # true treatment effects are fixed/random
  num_cl = 1)                                     # number of cores for parallelized computing
  

Generic function for optimizing multi-trial programs

Description

Generic function for optimizing multi-trial programs

Usage

optimal_multitrial_generic(case, strategy = TRUE)

Arguments

case

choose case: "at least 1, 2 or 3 significant trials needed for approval"

strategy

choose strategy: "conduct 1, 2, 3 or 4 trials in order to achieve the case's goal"; TRUE calculates all strategies of the selected case

Effect sizes

In other settings, the definition of "small", "medium" and "large" effect sizes can be user-specified using the input parameters steps1, stepm1 and stepl1. Due to the complexity of the multitrial setting, this feature is not included for this setting. Instead, the effect sizes were set to predefined values as explained under sProg1, sProg2 and sProg3 in the Value section.


Optimal phase II/III drug development planning where several phase III trials are performed

Description

The optimal_multitrial_normal function enables planning of phase II/III drug development programs with several phase III trials for the same normally distributed endpoint. Its main output values are optimal sample size allocation and go/no-go decision rules. For normally distributed endpoints, the treatment effect is measured by the standardized difference in means (Delta). The assumed true treatment effects can be assumed fixed or modelled by a prior distribution.

Usage

optimal_multitrial_normal(
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  n2min,
  n2max,
  stepn2,
  kappamin,
  kappamax,
  stepkappa,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  b1,
  b2,
  b3,
  case,
  strategy = TRUE,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

Delta1

assumed true prior treatment effect measured as the standardized difference in means, see here for details

Delta2

assumed true prior treatment effect measured as the standardized difference in means, see here for details

in1

amount of information for Delta1 in terms of sample size, see here for details

in2

amount of information for Delta2 in terms of sample size, see here for details

a

lower boundary for the truncation of the prior distribution

b

upper boundary for the truncation of the prior distribution

n2min

minimal total sample size for phase II; must be an even number

n2max

maximal total sample size for phase II, must be an even number

stepn2

step size for the optimization over n2; must be an even number

kappamin

minimal threshold value kappa for the go/no-go decision rule

kappamax

maximal threshold value kappa for the go/no-go decision rule

stepkappa

step size for the optimization over the threshold value kappa

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the sample size for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b1

expected gain for effect size category "small" in 10^5 $

b2

expected gain for effect size category "medium" in 10^5 $

b3

expected gain for effect size category "large" in 10^5 $

case

choose case: "at least 1, 2 or 3 significant trials needed for approval"

strategy

choose strategy: "conduct 1, 2, 3 or 4 trials in order to achieve the case's goal"; TRUE calculates all strategies of the selected case

fixed

choose if true treatment effects are fixed or follow a prior distribution; if TRUE, Delta1 is used as the fixed effect

num_cl

number of clusters used for parallel computing, default: 1

Details

The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Value

The output of the function is a data.frame object containing the optimization results:

Case

Case: "number of significant trials needed"

Strategy

Strategy: "number of trials to be conducted in order to achieve the goal of the case"

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

Kappa

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2 + K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III (lower boundary on the standardized difference in means scale is set to 0, as proposed by Cohen (1988))

sProg2

probability of a successful program with "medium" treatment effect in phase III (lower boundary on the standardized difference in means scale is set to 0.5, as proposed by Cohen (1988))

sProg3

probability of a successful program with "large" treatment effect in phase III (lower boundary on the standardized difference in means scale is set to 0.8, as proposed by Cohen (1988))

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Applying cat(comment()) to the data frame prints the optimization sequences used as well as the start and finish times of the optimization procedure. The attribute attr(,"trace") contains the utility values of all parameter combinations visited during optimization.

Effect sizes

In other settings, the definition of "small", "medium" and "large" effect sizes can be user-specified using the input parameters steps1, stepm1 and stepl1. Due to the complexity of the multitrial setting, this feature is not included for this setting. Instead, the effect sizes were set to predefined values as explained under sProg1, sProg2 and sProg3 in the Value section.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_multitrial_normal(w = 0.3,           # define parameters for prior
  Delta1 = 0.375, Delta2 = 0.625,
  in1 = 300, in2 = 600,                               # (https://web.imbi.uni-heidelberg.de/prior/)
  a = 0.25, b = 0.75,
  n2min = 20, n2max = 100, stepn2 = 4,                # define optimization set for n2
  kappamin = 0.02, kappamax = 0.2, stepkappa = 0.02,  # define optimization set for kappa
  alpha = 0.025, beta = 0.1,                          # drug development planning parameters
  c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,          # fixed and variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                         # set constraints
  b1 = 3000, b2 = 8000, b3 = 10000,                   # expected benefit for each effect size                                         
  case = 1, strategy = TRUE,                          # choose Case and Strategy
  fixed = TRUE,                                       # true treatment effects are fixed/random
  num_cl = 1)                                         # number of cores for parallelized computing
  


Optimal phase II/III drug development planning with normally distributed endpoint

Description

The function optimal_normal of the drugdevelopR package enables planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules for normally distributed endpoints. The treatment effect is measured by the standardized difference in means. The assumed true treatment effects can be assumed to be fixed or modelled by a prior distribution. The R Shiny application prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_normal(
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  n2min,
  n2max,
  stepn2,
  kappamin,
  kappamax,
  stepkappa,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 0,
  stepm1 = 0.5,
  stepl1 = 0.8,
  b1,
  b2,
  b3,
  gamma = 0,
  fixed = FALSE,
  skipII = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

Delta1

assumed true prior treatment effect measured as the standardized difference in means, see here for details

Delta2

assumed true prior treatment effect measured as the standardized difference in means, see here for details

in1

amount of information for Delta1 in terms of sample size, see here for details

in2

amount of information for Delta2 in terms of sample size, see here for details

a

lower boundary for the truncation of the prior distribution

b

upper boundary for the truncation of the prior distribution

n2min

minimal total sample size for phase II; must be an even number

n2max

maximal total sample size for phase II, must be an even number

stepn2

step size for the optimization over n2; must be an even number

kappamin

minimal threshold value kappa for the go/no-go decision rule

kappamax

maximal threshold value kappa for the go/no-go decision rule

stepkappa

step size for the optimization over the threshold value kappa

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the sample size for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small", default: 0

stepm1

lower boundary for effect size category "medium" = upper boundary for effect size category "small", default: 0.5

stepl1

lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8

b1

expected gain for effect size category "small" in 10^5 $

b2

expected gain for effect size category "medium" in 10^5 $

b3

expected gain for effect size category "large" in 10^5 $

gamma

to model different populations in phase II and III choose gamma != 0, default: 0, see here for details

fixed

choose if true treatment effects are fixed or following a prior distribution, if TRUE Delta1 is used as fixed effect

skipII

choose if skipping phase II is an option, default: FALSE; if TRUE, the program calculates the expected utility for the case when phase II is skipped and compares it to the situation when phase II is not skipped. The results are then returned as a two-row data frame, res[1, ] being the results when including phase II and res[2, ] when skipping phase II. res[2, ] has an additional parameter, res[2, ]$median_prior_Delta, which is the assumed effect size used for planning the phase III study when phase II is skipped.

num_cl

number of clusters used for parallel computing, default: 1

Value

The output of the function optimal_normal is a data.frame containing the optimization results:

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

Kappa

optimal threshold value for the decision rule to go to phase III

n2

total sample size for phase II

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III

sProg2

probability of a successful program with "medium" treatment effect in phase III

sProg3

probability of a successful program with "large" treatment effect in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters.

Taking cat(comment()) of the data frame lists the used optimization sequences as well as the start and finish date of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences.

Examples

# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize

optimal_normal(w=0.3,                       # define parameters for prior
  Delta1 = 0.375, Delta2 = 0.625, in1=300, in2=600,  # (https://web.imbi.uni-heidelberg.de/prior/)
  a = 0.25, b = 0.75,
  n2min = 20, n2max = 100, stepn2 = 4,               # define optimization set for n2
  kappamin = 0.02, kappamax = 0.2, stepkappa = 0.02, # define optimization set for kappa
  alpha = 0.025, beta = 0.1,                          # drug development planning parameters
  c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,         # fixed/variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                        # set constraints
  steps1 = 0,                                        # define lower boundary for "small"
  stepm1 = 0.5,                                      # "medium"
  stepl1 = 0.8,                                      # and "large" effect size categories
  b1 = 3000, b2 = 8000, b3 = 10000,                  # benefit for each effect size category
  gamma = 0,                                         # population structures in phase II/III
  fixed = FALSE,                                     # true treatment effects are fixed/random
  skipII = FALSE,                                    # skipping phase II
  num_cl = 1)                                        # number of cores for parallelized computing


Generic function for optimizing normally distributed endpoints

Description

Generic function for optimizing normally distributed endpoints

Usage

optimal_normal_generic(
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  n2min,
  n2max,
  stepn2,
  kappamin,
  kappamax,
  stepkappa,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 0,
  stepm1 = 0.5,
  stepl1 = 0.8,
  b1,
  b2,
  b3,
  gamma = 0,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution

Delta1

assumed true prior treatment effect measured as the standardized difference in means, see here for details

Delta2

assumed true prior treatment effect measured as the standardized difference in means, see here for details

in1

amount of information for Delta1 in terms of sample size, see here for details

in2

amount of information for Delta2 in terms of sample size, see here for details

a

lower boundary for the truncation of the prior distribution

b

upper boundary for the truncation of the prior distribution

n2min

minimal total sample size for phase II; must be an even number

n2max

maximal total sample size for phase II; must be an even number

stepn2

step size for the optimization over n2; must be an even number

kappamin

minimal threshold value kappa for the go/no-go decision rule

kappamax

maximal threshold value kappa for the go/no-go decision rule

stepkappa

step size for the optimization over the threshold value kappa

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the sample size for phase III

c2

variable per-patient cost for phase II in 10^5 $

c3

variable per-patient cost for phase III in 10^5 $

c02

fixed cost for phase II in 10^5 $

c03

fixed cost for phase III in 10^5 $

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small", default: 0

stepm1

lower boundary for effect size category "medium" = upper boundary for effect size category "small", default: 0.5

stepl1

lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8

b1

expected gain for effect size category "small" in 10^5 $

b2

expected gain for effect size category "medium" in 10^5 $

b3

expected gain for effect size category "large" in 10^5 $

gamma

to model different populations in phase II and III choose gamma != 0, default: 0, see here for details

fixed

choose if true treatment effects are fixed or following a prior distribution, if TRUE Delta1 is used as fixed effect

num_cl

number of clusters used for parallel computing, default: 1


Function for generating documentation of return values

Description

Generates the documentation of the return value of an optimal function including some custom text.

Usage

optimal_return_doc(type, setting = "basic")

Arguments

type

string deciding whether this is return text for normal, binary or time-to-event endpoints

setting

string containing the setting, i.e. "basic", "bias", "multitrial"

Value

string containing the documentation of the return value.


Optimal phase II/III drug development planning with time-to-event endpoint

Description

The function optimal_tte of the drugdevelopR package enables planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules for time-to-event endpoints (Kirchner et al., 2016). The true treatment effects can be assumed to be fixed or modelled by a prior distribution. When assuming fixed true treatment effects, planning can also be done with the user-friendly R Shiny app basic. The app prior visualizes the prior distributions used in this package. Fast computing is enabled by parallel programming.

Usage

optimal_tte(
  w,
  hr1,
  hr2,
  id1,
  id2,
  d2min,
  d2max,
  stepd2,
  hrgomin,
  hrgomax,
  stephrgo,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  b1,
  b2,
  b3,
  gamma = 0,
  fixed = FALSE,
  skipII = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution, see this Shiny application for the choice of weights

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

d2min

minimal number of events for phase II

d2max

maximal number of events for phase II

stepd2

step size for the optimization over d2

hrgomin

minimal threshold value for the go/no-go decision rule

hrgomax

maximal threshold value for the go/no-go decision rule

stephrgo

step size for the optimization over HRgo

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the number of events for phase III by Schoenfeld's formula (Schoenfeld 1981)

xi2

assumed event rate for phase II, used for calculating the sample size of phase II via n2 = d2/xi2

xi3

event rate for phase III, used for calculating the sample size of phase III in analogy to xi2

c2

variable per-patient cost for phase II in 10^5 $.

c3

variable per-patient cost for phase III in 10^5 $.

c02

fixed cost for phase II in 10^5 $.

c03

fixed cost for phase III in 10^5 $.

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in HR scale, default: 1

stepm1

lower boundary for effect size category "medium" in HR scale = upper boundary for effect size category "small" in HR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in HR scale = upper boundary for effect size category "medium" in HR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

gamma

to model different populations in phase II and III choose gamma != 0, default: 0

fixed

choose if true treatment effects are fixed or random, if TRUE hr1 is used as a fixed effect and hr2 is ignored

skipII

choose if skipping phase II is an option, default: FALSE; if TRUE, the program calculates the expected utility for the case when phase II is skipped and compares it to the situation when phase II is not skipped. The results are then returned as a two-row data frame, res[1, ] being the results when including phase II and res[2, ] when skipping phase II. res[2, ] has an additional parameter, res[2, ]$median_prior_HR, which is the assumed hazard ratio used for planning the phase III study when phase II is skipped. It is calculated as the exponential function of the median of the prior distribution.

num_cl

number of clusters used for parallel computing, default: 1

Format

data.frame containing the optimization results (see Value)

Value

The output of the function is a data.frame object containing the optimization results:

u

maximal expected utility under the optimization constraints, i.e. the expected utility of the optimal sample size and threshold value

HRgo

optimal threshold value for the decision rule to go to phase III

d2

optimal total number of events for phase II

d3

total expected number of events for phase III; rounded to next natural number

d

total expected number of events in the program; d = d2 + d3

n2

total sample size for phase II; rounded to the next even natural number

n3

total sample size for phase III; rounded to the next even natural number

n

total sample size in the program; n = n2 + n3

K

maximal costs of the program (i.e. the cost constraint, if it is set or the sum K2+K3 if no cost constraint is set)

pgo

probability to go to phase III

sProg

probability of a successful program

sProg1

probability of a successful program with "small" treatment effect in phase III

sProg2

probability of a successful program with "medium" treatment effect in phase III

sProg3

probability of a successful program with "large" treatment effect in phase III

K2

expected costs for phase II

K3

expected costs for phase III

and further input parameters. Taking cat(comment()) of the data frame lists the used optimization sequences, start and finish time of the optimization procedure. The attribute attr(,"trace") returns the utility values of all parameter combinations visited during optimization.

References

Kirchner, M., Kieser, M., Goette, H., & Schueler, A. (2016). Utility-based optimization of phase II/III programs. Statistics in Medicine, 35(2), 305-316.

IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last access 15.05.19.

Schoenfeld, D. (1981). The asymptotic properties of nonparametric tests for comparing survival distributions. Biometrika, 68(1), 316-319.

See Also

optimal_binary, optimal_normal, optimal_bias, optimal_multitrial and optimal_multiarm

Examples

# Activate progress bar (optional)
## Not run: 
progressr::handlers(global = TRUE)

## End(Not run)
# Optimize

optimal_tte(w = 0.3,                    # define parameters for prior
  hr1 = 0.69, hr2 = 0.88, id1 = 210, id2 = 420,   # (https://web.imbi.uni-heidelberg.de/prior/)
  d2min = 20, d2max = 100, stepd2 = 5,            # define optimization set for d2
  hrgomin = 0.7, hrgomax = 0.9, stephrgo = 0.05,  # define optimization set for HRgo
  alpha = 0.025, beta = 0.1, xi2 = 0.7, xi3 = 0.7, # drug development planning parameters
  c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,        # fixed/variable costs for phase II/III
  K = Inf, N = Inf, S = -Inf,                     # set constraints
  steps1 = 1,                                     # define lower boundary for "small"
  stepm1 = 0.95,                                  # "medium"
  stepl1 = 0.85,                                  # and "large" treatment effect size categories
  b1 = 1000, b2 = 2000, b3 = 3000,                # expected benefit for each effect size category
  gamma = 0,                                      # population structures in phase II/III
  fixed = FALSE,                                  # true treatment effects are fixed/random
  skipII = FALSE,                                 # skipping phase II 
  num_cl = 1)                                     # number of cores for parallelized computing 
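The planning quantities described in the arguments above — Schoenfeld's formula for the required number of phase III events and the conversion to a sample size via the event rate (n = d/xi) — can be sketched numerically. This is an illustrative Python calculation with hypothetical inputs, not the package's internal code:

```python
# Illustrative sketch, not drugdevelopR internals: Schoenfeld's formula for
# the number of events, and the sample size derived from an event rate xi3.
import math
from statistics import NormalDist

alpha, beta = 0.025, 0.1   # one-sided level and type II error rate
hr, xi3 = 0.8, 0.7         # hypothetical hazard ratio and phase III event rate

z = NormalDist().inv_cdf
# Schoenfeld (1981), 1:1 allocation: d = 4 (z_{1-alpha} + z_{1-beta})^2 / log(hr)^2
d3 = 4 * (z(1 - alpha) + z(1 - beta)) ** 2 / math.log(hr) ** 2
# sample size from the event rate, n3 = d3/xi3, rounded up to the next even number
n3 = math.ceil(d3 / xi3)
n3 += n3 % 2
print(round(d3), n3)   # 844 events, 1206 patients
```

Smaller hazard ratios shrink d3 quadratically in log(hr), which is why the optimization over HRgo trades off the go-probability against phase III cost.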


Generic function for optimal planning of time-to-event endpoints

Description

Generic function for optimal planning of time-to-event endpoints

Usage

optimal_tte_generic(
  w,
  hr1,
  hr2,
  id1,
  id2,
  d2min,
  d2max,
  stepd2,
  hrgomin,
  hrgomax,
  stephrgo,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K = Inf,
  N = Inf,
  S = -Inf,
  steps1 = 1,
  stepm1 = 0.95,
  stepl1 = 0.85,
  b1,
  b2,
  b3,
  gamma = 0,
  fixed = FALSE,
  num_cl = 1
)

Arguments

w

weight for mixture prior distribution, see this Shiny application for the choice of weights

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

d2min

minimal number of events for phase II

d2max

maximal number of events for phase II

stepd2

step size for the optimization over d2

hrgomin

minimal threshold value for the go/no-go decision rule

hrgomax

maximal threshold value for the go/no-go decision rule

stephrgo

step size for the optimization over HRgo

alpha

one-sided significance level

beta

type II error rate; i.e. 1 - beta is the power for calculation of the number of events for phase III by Schoenfeld's formula (Schoenfeld 1981)

xi2

assumed event rate for phase II, used for calculating the sample size of phase II via n2 = d2/xi2

xi3

event rate for phase III, used for calculating the sample size of phase III in analogy to xi2

c2

variable per-patient cost for phase II in 10^5 $.

c3

variable per-patient cost for phase III in 10^5 $.

c02

fixed cost for phase II in 10^5 $.

c03

fixed cost for phase III in 10^5 $.

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in HR scale, default: 1

stepm1

lower boundary for effect size category "medium" in HR scale = upper boundary for effect size category "small" in HR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in HR scale = upper boundary for effect size category "medium" in HR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

gamma

to model different populations in phase II and III choose gamma != 0, default: 0

fixed

choose if true treatment effects are fixed or random, if TRUE hr1 is used as a fixed effect and hr2 is ignored

num_cl

number of clusters used for parallel computing, default: 1


Probability that endpoint OS significant

Description

This function calculates the probability that the endpoint OS is statistically significant. In the context of cancer research, OS stands for overall survival; a positive treatment effect in this endpoint is thus sufficient for a successful program.

Usage

os_tte(HRgo, n2, alpha, beta, hr1, hr2, id1, id2, fixed, rho, rsamp)

Arguments

HRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

one-sided significance level

beta

1-beta power for calculation of the number of events for phase III by Schoenfeld's (1981) formula

hr1

assumed true treatment effect on HR scale for endpoint OS

hr2

assumed true treatment effect on HR scale for endpoint PFS

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose if true treatment effects are fixed or random

rho

correlation between the two endpoints

rsamp

sample data set for Monte Carlo integration

Value

The output of the function os_tte() is the probability that the endpoint OS is statistically significant.


Probability to go to phase III for multiarm programs with binary distributed outcomes

Description

Given our parameters, this function calculates the probability to go to phase III after phase II has been conducted. The considered strategies (1: only the best promising treatment, 2: all promising treatments) are selected via the strategy argument.

Usage

pgo_binary(RRgo, n2, p0, p11, p12, strategy, case)

Arguments

RRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

p0

assumed true rate of control group

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising")

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function pgo_binary() returns the probability to go to phase III.

Examples

res <- pgo_binary(RRgo = 0.8, n2 = 50, p0 = 0.6, p11 = 0.3, p12 = 0.5, strategy = 2, case = 31)

Probability to go to phase III for multiple endpoints with normally distributed outcomes

Description

This function calculates the probability that we go to phase III, i.e. that the results of phase II are promising enough to get a successful drug development program. Successful means that both endpoints show a statistically significant positive treatment effect in phase III.

Usage

pgo_multiple_normal(
  kappa,
  n2,
  Delta1,
  Delta2,
  in1,
  in2,
  sigma1,
  sigma2,
  fixed,
  rho,
  rsamp
)

Arguments

kappa

threshold value for the go/no-go decision rule; vector for both endpoints

n2

total sample size for phase II; must be an even number

Delta1

assumed true treatment effect given as difference in means for endpoint 1

Delta2

assumed true treatment effect given as difference in means for endpoint 2

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

sigma1

standard deviation of first endpoint

sigma2

standard deviation of second endpoint

fixed

choose if true treatment effects are fixed or random, if TRUE Delta1 is used as fixed effect

rho

correlation between the two endpoints

rsamp

sample data set for Monte Carlo integration

Value

The output of the function pgo_multiple_normal() is the probability to go to phase III.


Probability to go to phase III for multiple endpoints in the time-to-event setting

Description

This function calculates the probability that we go to phase III, i.e. that results of phase II are promising enough to get a successful drug development program. Successful means that at least one endpoint shows a statistically significant positive treatment effect in phase III.

Usage

pgo_multiple_tte(HRgo, n2, hr1, hr2, id1, id2, fixed, rho)

Arguments

HRgo

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

hr1

assumed true treatment effect on HR scale for endpoint OS

hr2

assumed true treatment effect on HR scale for endpoint PFS

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose if true treatment effects are fixed or random

rho

correlation between the two endpoints

Value

The output of the function pgo_multiple_tte() is the probability to go to phase III.


Probability to go to phase III for multiarm programs with normally distributed outcomes

Description

Given our parameters, this function calculates the probability to go to phase III after phase II has been conducted. The considered strategies (1: only the best promising treatment, 2: all promising treatments) are selected via the strategy argument.

Usage

pgo_normal(kappa, n2, Delta1, Delta2, strategy, case)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising")

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function pgo_normal() returns the probability to go to phase III for multiarm programs with normally distributed outcomes.

Examples

res <- pgo_normal(kappa = 0.1, n2 = 50, Delta1 = 0.375, Delta2 = 0.625, strategy = 2, case = 31)

Probability to go to phase III for multiarm programs with time-to-event outcomes

Description

Given our parameters, this function calculates the probability to go to phase III after phase II has been conducted. The considered strategies (1: only the best promising treatment, 2: all promising treatments) are selected via the strategy argument.

Usage

pgo_tte(HRgo, n2, ec, hr1, hr2, strategy, case)

Arguments

HRgo

threshold value for the go/no-go decision rule

n2

total sample size in phase II, must be divisible by 3

ec

control arm event rate for phase II and III

hr1

assumed true treatment effect on HR scale for treatment 1

hr2

assumed true treatment effect on HR scale for treatment 2

strategy

choose Strategy: 1 ("only best promising"), 2 ("all promising")

case

different cases: 1 ("nogo"), 21 (treatment 1 is promising, treatment 2 is not), 22 (treatment 2 is promising, treatment 1 is not), 31 (both treatments are promising, treatment 1 is better), 32 (both treatments are promising, treatment 2 is better)

Value

The function pgo_tte() returns the probability to go to phase III.

Examples

res <- pgo_tte(HRgo = 0.8, n2 = 48, ec = 0.6, hr1 = 0.7, hr2 = 0.8, strategy = 2, case = 31)

Probability of a successful program, when going to phase III for multiple endpoints with normally distributed outcomes

Description

After getting the "go"-decision for phase III, i.e. if the results of phase II exceed the predefined threshold kappa, this function calculates the probability that the program is successful, i.e. that both endpoints show a statistically significant positive treatment effect in phase III.

Usage

posp_normal(
  kappa,
  n2,
  alpha,
  beta,
  Delta1,
  Delta2,
  sigma1,
  sigma2,
  in1,
  in2,
  fixed,
  rho,
  rsamp
)

Arguments

kappa

threshold value for the go/no-go decision rule

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

Delta1

assumed true treatment effect given as difference in means for endpoint 1

Delta2

assumed true treatment effect given as difference in means for endpoint 2

sigma1

standard deviation of first endpoint

sigma2

standard deviation of second endpoint

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

fixed

choose if true treatment effects are fixed or random, if TRUE Delta1 is used as fixed effect

rho

correlation between the two endpoints

rsamp

sample data set for Monte Carlo integration

Value

The output of the function posp_normal() is the probability of a successful program, when going to phase III.


Printing a drugdevelopResult Object

Description

Displays details about the optimal results from a drugdevelopResult object.

Usage

## S3 method for class 'drugdevelopResult'
print(x, sequence = FALSE, ...)

Arguments

x

Data frame of class drugdevelopResult.

sequence

logical, print optimization sequence (default = FALSE)?

...

Further arguments.

Examples


# Activate progress bar (optional)
## Not run: progressr::handlers(global = TRUE)
# Optimize
res <- optimal_normal(w=0.3,                       
  Delta1 = 0.375, Delta2 = 0.625, in1=300, in2=600,  
  a = 0.25, b = 0.75,
  n2min = 20, n2max = 100, stepn2 = 4,               
  kappamin = 0.02, kappamax = 0.2, stepkappa = 0.02, 
  alpha = 0.025, beta = 0.1,                       
  c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,   
  K = Inf, N = Inf, S = -Inf,                        
  steps1 = 0,                                      
  stepm1 = 0.5,                                      
  stepl1 = 0.8,                                      
  b1 = 3000, b2 = 8000, b3 = 10000,                  
  gamma = 0,                                         
  fixed = FALSE,                                    
  skipII = FALSE,                                    
  num_cl = 1)
# Print results
print(res)                                  


Helper function for printing a drugdevelopResult Object

Description

Helper function for printing a drugdevelopResult Object

Usage

print_drugdevelopResult_helper(x, ...)

Arguments

x

Data frame

...

Further arguments.

Value

No return value, called for printing to the console using cat()


Prior distribution for time-to-event outcomes

Description

If we do not assume the treatment effects to be fixed, i.e. fixed = FALSE, the function prior_tte allows us to model the treatment effect following a prior distribution. For more details concerning the definition of a prior distribution, see the vignette on priors as well as the Shiny app.

Usage

prior_tte(x, w, hr1, hr2, id1, id2)

Arguments

x

integration variable

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

Value

The output of the function prior_tte() is the value of the prior distribution evaluated at x.

Examples

res <- prior_tte(x = 0.5, w = 0.5, hr1 = 0.69, hr2 = 0.88, id1 = 240, id2 = 420)
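To make the mixture structure concrete, here is a short Python sketch of a generic two-component normal mixture prior on the log-HR scale. The parameterization below (means -log(hr), standard deviations sqrt(4/id), motivated by Schoenfeld's variance 4/d for a log-HR estimate) is an assumption chosen for illustration and may differ from what prior_tte() actually implements:

```python
# Illustrative sketch only: a generic two-component normal mixture density.
# The sd = sqrt(4/id) parameterization is an assumption for this example,
# not a statement about drugdevelopR internals.
import math
from statistics import NormalDist

def mix_prior(x, w, hr1, hr2, id1, id2):
    comp1 = NormalDist(mu=-math.log(hr1), sigma=math.sqrt(4 / id1)).pdf(x)
    comp2 = NormalDist(mu=-math.log(hr2), sigma=math.sqrt(4 / id2)).pdf(x)
    return w * comp1 + (1 - w) * comp2

# density of the mixture at x = 0.5 on the log-HR scale
density = mix_prior(0.5, w=0.5, hr1=0.69, hr2=0.88, id1=240, id2=420)
```

The weight w shifts prior mass between the optimistic component around hr1 and the conservative component around hr2; larger id values concentrate each component more tightly.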

Probability that the effect in endpoint one is larger than in endpoint two

Description

This function calculates the probability that the treatment effect in endpoint one (or endpoint x) is larger than in endpoint two (or endpoint y), i.e. P(x > y) = P(x - y > 0)

Usage

pw(n2, hr1, hr2, id1, id2, fixed, rho)

Arguments

n2

total sample size for phase II; must be an even number

hr1

assumed true treatment effect on HR scale for endpoint OS

hr2

assumed true treatment effect on HR scale for endpoint PFS

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

fixed

choose if true treatment effects are fixed or random

rho

correlation between the two endpoints

Details

Z = X - Y is normally distributed with expectation mu_x - mu_y and variance sigma_x^2 + sigma_y^2 - 2 rho sigma_x sigma_y
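Numerically, the calculation in the Details above reduces to a single normal tail probability: P(X > Y) = P(Z > 0) with Z = X - Y normal, mean mu_x - mu_y and variance sigma_x^2 + sigma_y^2 - 2 rho sigma_x sigma_y. A Python sketch with hypothetical means, standard deviations and correlation:

```python
# Illustrative sketch of the Details: P(X > Y) for two correlated normal
# effect estimates. All numeric values below are hypothetical.
from statistics import NormalDist

mu_x, mu_y = 0.5, 0.3
sd_x, sd_y, rho = 0.4, 0.4, 0.5

# variance of the difference Z = X - Y
var_z = sd_x**2 + sd_y**2 - 2 * rho * sd_x * sd_y
p = 1 - NormalDist(mu=mu_x - mu_y, sigma=var_z**0.5).cdf(0.0)
print(round(p, 3))   # 0.691
```

Note that a positive correlation rho shrinks var_z, so strongly correlated endpoints make the ordering of their effects more certain.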

Value

The output of the function pw() is the probability that endpoint one shows a better result than endpoint two.


Total sample size for phase III trial with l treatments and equal allocation ratio for binary outcomes

Description

Depending on the results of phase II and our strategy, i.e. whether we proceed only with the best promising treatment (l = 1) or with all promising treatments (l = 2), this program calculates the number of participants in phase III.

l=1: according to Schoenfeld to guarantee power for the log rank test to detect treatment effect of phase II;

l=2: according to Dunnett to guarantee y any-pair power (Horn & Vollandt)

Usage

ss_binary(alpha, beta, p0, p11, y, l)

Arguments

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

p0

assumed true rate of control group

p11

assumed true rate of treatment group

y

hat_theta_2; estimator in phase II

l

number of treatments in phase III:

  • l=1: according to Schoenfeld to guarantee power for the log rank test to detect treatment effect of phase II;

  • l=2: according to Dunnett to guarantee y any-pair power (Horn & Vollandt)

Value

the function ss_binary() returns the total sample size for phase III trial with l treatments and equal allocation ratio

Examples

res <- ss_binary(alpha = 0.05, beta = 0.1, p0 = 0.6, p11 = 0.3, y = 0.5, l = 1)

Total sample size for phase III trial with l treatments and equal allocation ratio for normally distributed outcomes

Description

Depending on the results of phase II and our strategy, i.e. whether we proceed only with the best promising treatment (l = 1) or with all promising treatments (l = 2), this program calculates the number of participants in phase III.

l=1: according to Schoenfeld to guarantee power for the log rank test to detect treatment effect of phase II;

l=2: according to Dunnett to guarantee y any-pair power (Horn & Vollandt)

Usage

ss_normal(alpha, beta, y, l)

Arguments

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

y

hat_theta_2; estimator in phase II

l

number of treatments in phase III:

  • l=1: according to Schoenfeld to guarantee power for the log rank test to detect treatment effect of phase II;

  • l=2: according to Dunnett to guarantee y any-pair power (Horn & Vollandt)

Value

the function ss_normal() returns the total sample size for phase III trial with l treatments and equal allocation ratio

Examples

res <- ss_normal(alpha = 0.05, beta = 0.1, y = 0.5, l = 1)

Total sample size for phase III trial with l treatments and equal allocation ratio for time-to-event outcomes

Description

Depending on the results of phase II and our strategy, i.e. whether we proceed only with the best promising treatment (l = 1) or with all promising treatments (l = 2), this program calculates the number of participants in phase III.

l=1: according to Schoenfeld, to guarantee power for the log-rank test to detect the treatment effect observed in phase II;

l=2: according to Dunnett, to guarantee any-pair power (Horn & Vollandt)

Usage

ss_tte(alpha, beta, ec, ek, y, l)

Arguments

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

ec

control arm event rate for phase II and III

ek

event rate of arm k (either arm 1 or arm 2)

y

hat_theta_2; estimator in phase II

l

number of treatments in phase III:

  • l=1: according to Schoenfeld, to guarantee power for the log-rank test to detect the treatment effect observed in phase II;

  • l=2: according to Dunnett, to guarantee any-pair power (Horn & Vollandt)

Value

The function ss_tte() returns the total sample size for a phase III trial with l treatments and an equal allocation ratio.

Examples

res <- ss_tte(alpha = 0.05, beta = 0.1, ec = 0.6, ek = 0.8, y = 0.5, l=1)
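
Because l is an ordinary argument, the two strategies can be compared directly for a fixed design. A minimal sketch reusing the parameter values of the example above (illustration only; if the ss_* helpers are not exported, they may need to be accessed via the drugdevelopR::: operator):

```r
# Compare the phase III sample size when proceeding with only the best
# promising treatment (l = 1) versus all promising treatments (l = 2).
library(drugdevelopR)
n_best <- ss_tte(alpha = 0.05, beta = 0.1, ec = 0.6, ek = 0.8, y = 0.5, l = 1)
n_all  <- ss_tte(alpha = 0.05, beta = 0.1, ec = 0.6, ek = 0.8, y = 0.5, l = 2)
# Dunnett's any-pair power criterion for l = 2 typically requires more
# participants than the single-comparison case.
c(best_only = n_best, all_promising = n_all)
```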

Utility function for multitrial programs deciding between two or three phase III trials in a time-to-event setting

Description

The utility function calculates the expected utility of our drug development program. It is given as gains minus costs and depends on the parameters and the expected probability of a successful program. The utility is in a further step maximized by the optimal_multitrial() function.

Usage

utility23(
  d2,
  HRgo,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  b1,
  b2,
  b3
)

Arguments

d2

total sample size for phase II; must be an even number

HRgo

threshold value for the go/no-go decision rule

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

xi2

event rate for phase II

xi3

event rate for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

Value

The output of the function utility23() is the expected utility of the program depending on whether two or three phase III trials are performed.

Examples

utility23(d2 = 50, HRgo = 0.8,  w = 0.3, 
                                 hr1 =  0.69, hr2 = 0.81, 
                                 id1 = 280, id2 = 420, 
                                 alpha = 0.025, beta = 0.1, xi2 = 0.7, xi3 = 0.7,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 b1 = 1000, b2 = 2000, b3 = 3000)
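
Because utility23() returns the expected utility for one fixed design, the maximization carried out by optimal_multitrial() can be pictured as a search over candidate designs. A minimal sketch under the example's parameter values (hypothetical illustration, not the package's actual optimization routine; max() is used here on the assumption that the return value may contain both the two-trial and the three-trial utility):

```r
# Hypothetical grid search over phase II event numbers d2 and go/no-go
# thresholds HRgo; optimal_multitrial() performs the real optimization.
library(drugdevelopR)
grid <- expand.grid(d2 = seq(20, 100, by = 20), HRgo = seq(0.70, 0.90, by = 0.05))
grid$utility <- mapply(
  function(d2, HRgo) max(utility23(
    d2 = d2, HRgo = HRgo, w = 0.3,
    hr1 = 0.69, hr2 = 0.81, id1 = 280, id2 = 420,
    alpha = 0.025, beta = 0.1, xi2 = 0.7, xi3 = 0.7,
    c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
    b1 = 1000, b2 = 2000, b3 = 3000)),
  grid$d2, grid$HRgo)
grid[which.max(grid$utility), ]  # candidate design with the highest utility
```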

Utility function for multitrial programs deciding between two or three phase III trials for a binary distributed outcome

Description

The utility function calculates the expected utility of our drug development program. It is given as gains minus costs and depends on the parameters and the expected probability of a successful program. The utility is in a further step maximized by the optimal_multitrial_binary() function.

Usage

utility23_binary(
  n2,
  RRgo,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  b1,
  b2,
  b3
)

Arguments

n2

total sample size for phase II; must be an even number

RRgo

threshold value for the go/no-go decision rule

w

weight for mixture prior distribution

p0

assumed true rate of control group

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

Value

The output of the function utility23_binary() is the expected utility of the program depending on whether two or three phase III trials are performed.

Examples

utility23_binary(n2 = 50, RRgo = 0.8,  w = 0.3, 
                                 alpha = 0.05, beta = 0.1,
                                 p0 = 0.6, p11 =  0.3, p12 = 0.5, 
                                 in1 = 300, in2 = 600, 
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 b1 = 1000, b2 = 2000, b3 = 3000)

Utility function for multitrial programs deciding between two or three phase III trials for a normally distributed outcome

Description

The utility function calculates the expected utility of our drug development program and is given as gains minus costs and depends on the parameters and the expected probability of a successful program. The utility is in a further step maximized by the optimal_multitrial_normal() function.

Usage

utility23_normal(
  n2,
  kappa,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  b1,
  b2,
  b3
)

Arguments

n2

total sample size for phase II; must be an even number

kappa

threshold value for the go/no-go decision rule

w

weight for mixture prior distribution

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

a

lower boundary for the truncation

b

upper boundary for the truncation

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

Value

The output of the function utility23_normal() is the expected utility of the program depending on whether two or three phase III trials are performed.

Examples

utility23_normal(n2 = 50, kappa = 0.2, w = 0.3, alpha = 0.025, beta = 0.1,
                                Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600, 
                                a = 0.25, b = 0.75, 
                                c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,
                                b1 = 3000, b2 = 8000, b3 = 10000)

Utility function for bias adjustment programs with time-to-event outcomes.

Description

The utility function calculates the expected utility of our drug development program and is given as gains minus costs and depends on the parameters and the expected probability of a successful program. The utility is in a further step maximized by the optimal_bias() function.

Usage

utility_L(
  d2,
  HRgo,
  Adj,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_L2(
  d2,
  HRgo,
  Adj,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_R(
  d2,
  HRgo,
  Adj,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_R2(
  d2,
  HRgo,
  Adj,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

Arguments

d2

total events for phase II; must be an even number

HRgo

threshold value for the go/no-go decision rule

Adj

adjustment parameter

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

xi2

event rate for phase II

xi3

event rate for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in RR scale, default: 1

stepm1

lower boundary for effect size category "medium" in RR scale = upper boundary for effect size category "small" in RR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in RR scale = upper boundary for effect size category "medium" in RR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

fixed

choose whether the true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

Value

The output of the functions utility_L(), utility_L2(), utility_R() and utility_R2() is the expected utility of the program.

Examples

res <- utility_L(d2 = 50, HRgo = 0.8, Adj = 0.4, w = 0.3, 
                                 hr1 =  0.69, hr2 = 0.81, 
                                 id1 = 280, id2 = 420, xi2 = 0.7, xi3 = 0.7,
                                 alpha = 0.025, beta = 0.1,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf,
                                 steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                                 b1 = 1000, b2 = 2000, b3 = 3000, 
                                 fixed = TRUE)
res <- utility_L2(d2 = 50, HRgo = 0.8, Adj = 0.4, w = 0.3, 
                                 hr1 =  0.69, hr2 = 0.81, 
                                 id1 = 280, id2 = 420, xi2 = 0.7, xi3 = 0.7,
                                 alpha = 0.025, beta = 0.1,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf,
                                 steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                                 b1 = 1000, b2 = 2000, b3 = 3000, 
                                 fixed = TRUE)
res <- utility_R(d2 = 50, HRgo = 0.8, Adj = 0.9, w = 0.3, 
                                 hr1 =  0.69, hr2 = 0.81, 
                                 id1 = 280, id2 = 420, xi2 = 0.7, xi3 = 0.7,
                                 alpha = 0.025, beta = 0.1,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf,
                                 steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                                 b1 = 1000, b2 = 2000, b3 = 3000, 
                                 fixed = TRUE)
res <- utility_R2(d2 = 50, HRgo = 0.8, Adj = 0.9, w = 0.3, 
                                 hr1 =  0.69, hr2 = 0.81, 
                                 id1 = 280, id2 = 420, xi2 = 0.7, xi3 = 0.7,
                                 alpha = 0.025, beta = 0.1,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf,
                                 steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                                 b1 = 1000, b2 = 2000, b3 = 3000, 
                                 fixed = TRUE)

Utility function for bias adjustment programs with binary distributed outcomes.

Description

The utility function calculates the expected utility of our drug development program and is given as gains minus costs and depends on the parameters and the expected probability of a successful program. The utility is in a further step maximized by the optimal_bias_binary() function.

Usage

utility_binary_L(
  n2,
  RRgo,
  Adj,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_binary_L2(
  n2,
  RRgo,
  Adj,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_binary_R(
  n2,
  RRgo,
  Adj,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_binary_R2(
  n2,
  RRgo,
  Adj,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

Arguments

n2

total sample size for phase II; must be an even number

RRgo

threshold value for the go/no-go decision rule

Adj

adjustment parameter

w

weight for mixture prior distribution

p0

assumed true rate of control group

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in RR scale, default: 1

stepm1

lower boundary for effect size category "medium" in RR scale = upper boundary for effect size category "small" in RR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in RR scale = upper boundary for effect size category "medium" in RR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

fixed

choose whether the true treatment effects are fixed or random; if TRUE, p11 is used as the fixed effect

Value

The output of the functions utility_binary_L(), utility_binary_L2(), utility_binary_R() and utility_binary_R2() is the expected utility of the program.

Examples

res <- utility_binary_L(n2 = 50, RRgo = 0.8, Adj = 0.1, w = 0.3, 
                                 p0 = 0.6, p11 =  0.3, p12 = 0.5, 
                                 in1 = 300, in2 = 600, 
                                 alpha = 0.025, beta = 0.1,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf,
                                 steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                                 b1 = 1000, b2 = 2000, b3 = 3000, 
                                 fixed = TRUE)
res <- utility_binary_L2(n2 = 50, RRgo = 0.8, Adj = 0.1, w = 0.3, 
                                 p0 = 0.6, p11 =  0.3, p12 = 0.5, 
                                 in1 = 300, in2 = 600, 
                                 alpha = 0.025, beta = 0.1,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf,
                                 steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                                 b1 = 1000, b2 = 2000, b3 = 3000, 
                                 fixed = TRUE)
res <- utility_binary_R(n2 = 50, RRgo = 0.8, Adj = 0.9, w = 0.3, 
                                 p0 = 0.6, p11 =  0.3, p12 = 0.5, 
                                 in1 = 300, in2 = 600, 
                                 alpha = 0.025, beta = 0.1,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf,
                                 steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                                 b1 = 1000, b2 = 2000, b3 = 3000, 
                                 fixed = TRUE)
res <- utility_binary_R2(n2 = 50, RRgo = 0.8, Adj = 0.9, w = 0.3, 
                                 p0 = 0.6, p11 =  0.3, p12 = 0.5, 
                                 in1 = 300, in2 = 600, 
                                 alpha = 0.025, beta = 0.1,
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf,
                                 steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                                 b1 = 1000, b2 = 2000, b3 = 3000, 
                                 fixed = TRUE)

Utility function for bias adjustment programs with normally distributed outcomes.

Description

The utility function calculates the expected utility of our drug development program and is given as gains minus costs and depends on the parameters and the expected probability of a successful program. The utility is in a further step maximized by the optimal_bias_normal() function.

Usage

utility_normal_L(
  n2,
  kappa,
  Adj,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_normal_L2(
  n2,
  kappa,
  Adj,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_normal_R(
  n2,
  kappa,
  Adj,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

utility_normal_R2(
  n2,
  kappa,
  Adj,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed
)

Arguments

n2

total sample size for phase II; must be an even number

kappa

threshold value for the go/no-go decision rule

Adj

adjustment parameter

w

weight for mixture prior distribution

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

a

lower boundary for the truncation

b

upper boundary for the truncation

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small", default: 0

stepm1

lower boundary for effect size category "medium" = upper boundary for effect size category "small", default: 0.5

stepl1

lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

fixed

choose whether the true treatment effects are fixed or random; if TRUE, Delta1 is used as the fixed effect

Value

The output of the functions utility_normal_L(), utility_normal_L2(), utility_normal_R() and utility_normal_R2() is the expected utility of the program.

Examples

res <- utility_normal_L(kappa = 0.1, n2 = 50, Adj = 0, 
                                 alpha = 0.025, beta = 0.1, w = 0.3,
                                 Delta1 = 0.375, Delta2 = 0.625, 
                                 in1 = 300, in2 = 600, 
                                 a = 0.25, b = 0.75, 
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf, 
                                 steps1 = 0, stepm1 = 0.5, stepl1 = 0.8,
                                 b1 = 3000, b2 = 8000, b3 = 10000, 
                                 fixed = TRUE)
res <- utility_normal_L2(kappa = 0.1, n2 = 50, Adj = 0, 
                                 alpha = 0.025, beta = 0.1, w = 0.3,
                                 Delta1 = 0.375, Delta2 = 0.625, 
                                 in1 = 300, in2 = 600, 
                                 a = 0.25, b = 0.75, 
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf, 
                                 steps1 = 0, stepm1 = 0.5, stepl1 = 0.8,
                                 b1 = 3000, b2 = 8000, b3 = 10000, 
                                 fixed = TRUE)
res <- utility_normal_R(kappa = 0.1, n2 = 50, Adj = 1, 
                                 alpha = 0.025, beta = 0.1, w = 0.3,
                                 Delta1 = 0.375, Delta2 = 0.625, 
                                 in1 = 300, in2 = 600, 
                                 a = 0.25, b = 0.75, 
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf, 
                                 steps1 = 0, stepm1 = 0.5, stepl1 = 0.8,
                                 b1 = 3000, b2 = 8000, b3 = 10000, 
                                 fixed = TRUE)
res <- utility_normal_R2(kappa = 0.1, n2 = 50, Adj = 1, 
                                 alpha = 0.025, beta = 0.1, w = 0.3,
                                 Delta1 = 0.375, Delta2 = 0.625, 
                                 in1 = 300, in2 = 600, 
                                 a = 0.25, b = 0.75, 
                                 c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                                 K = Inf, N = Inf, S = -Inf, 
                                 steps1 = 0, stepm1 = 0.5, stepl1 = 0.8,
                                 b1 = 3000, b2 = 8000, b3 = 10000, 
                                 fixed = TRUE)

Utility function for multiarm programs with time-to-event outcomes

Description

The utility function calculates the expected utility of our drug development program. It is given as gains minus costs and depends on the parameters as well as on the sample size and the expected probability of a successful program. The utility is in a further step maximized by the optimal_multiarm() function.

Usage

utility_multiarm(
  n2,
  HRgo,
  alpha,
  beta,
  hr1,
  hr2,
  strategy,
  ec,
  c2,
  c02,
  c3,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3
)

Arguments

n2

total sample size for phase II; must be divisible by three

HRgo

threshold value for the go/no-go decision rule

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

hr1

assumed true treatment effect on HR scale for treatment 1

hr2

assumed true treatment effect on HR scale for treatment 2

strategy

choose the strategy: 1 ("only best promising") or 2 ("all promising")

ec

control arm event rate for phase II and III

c2

variable per-patient cost for phase II

c02

fixed cost for phase II

c3

variable per-patient cost for phase III

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in HR scale, default: 1

stepm1

lower boundary for effect size category "medium" in HR scale = upper boundary for effect size category "small" in HR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in HR scale = upper boundary for effect size category "medium" in HR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

Value

The output of the function utility_multiarm() is the expected utility of the program.

Examples

utility_multiarm(n2 = 50, HRgo = 0.8, alpha = 0.05, beta = 0.1,
                            hr1 = 0.7, hr2 = 0.8, strategy = 2, ec = 0.6,
                            c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                            K = Inf, N = Inf, S = -Inf,  
                            steps1 = 1, stepm1 = 0.95,  stepl1 = 0.85,
                            b1 = 1000, b2 = 2000, b3 = 3000)
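
Since strategy is an ordinary argument, both multiarm strategies can be evaluated at the same design point and their expected utilities compared. A minimal sketch reusing the example's parameters (illustration only; n2 is set to 48 so that it is divisible by three as the argument description requires, and the [1] subscript assumes the expected utility is the first element of the returned value):

```r
# Evaluate "only best promising" (strategy = 1) and "all promising"
# (strategy = 2) at an identical design and compare expected utilities.
library(drugdevelopR)
u <- sapply(1:2, function(s)
  utility_multiarm(n2 = 48, HRgo = 0.8, alpha = 0.05, beta = 0.1,
                   hr1 = 0.7, hr2 = 0.8, strategy = s, ec = 0.6,
                   c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                   K = Inf, N = Inf, S = -Inf,
                   steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                   b1 = 1000, b2 = 2000, b3 = 3000)[1])
names(u) <- c("best_promising", "all_promising")
u
```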

Utility function for multiarm programs with binary distributed outcomes

Description

The utility function calculates the expected utility of our drug development program. It is given as gains minus costs and depends on the parameters as well as on the sample size and the expected probability of a successful program. The utility is in a further step maximized by the optimal_multiarm_binary() function.

Usage

utility_multiarm_binary(
  n2,
  RRgo,
  alpha,
  beta,
  p0 = p0,
  p11 = p11,
  p12 = p12,
  strategy,
  c2,
  c02,
  c3,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3
)

Arguments

n2

total sample size for phase II; must be an even number

RRgo

threshold value for the go/no-go decision rule

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

p0

assumed true rate of control group

p11

assumed true rate of treatment group

p12

assumed true rate of treatment group

strategy

choose the strategy: 1 ("only best promising") or 2 ("all promising")

c2

variable per-patient cost for phase II

c02

fixed cost for phase II

c3

variable per-patient cost for phase III

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in RR scale, default: 1

stepm1

lower boundary for effect size category "medium" in RR scale = upper boundary for effect size category "small" in RR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in RR scale = upper boundary for effect size category "medium" in RR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

Value

The output of the function utility_multiarm_binary() is the expected utility of the program.

Examples

res <- utility_multiarm_binary(n2 = 50, RRgo = 0.8, alpha = 0.05, beta = 0.1,
                            p0 = 0.6, p11 =  0.3, p12 = 0.5, strategy = 1,
                            c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                            K = Inf, N = Inf, S = -Inf,  
                            steps1 = 1, stepm1 = 0.95,   stepl1 = 0.85,
                            b1 = 1000, b2 = 2000, b3 = 3000)

Utility function for multiarm programs with normally distributed outcomes

Description

The utility function calculates the expected utility of our drug development program. It is given as gains minus costs and depends on the parameters as well as on the sample size and the expected probability of a successful program. The utility is in a further step maximized by the optimal_multiarm_normal() function.

Usage

utility_multiarm_normal(
  n2,
  kappa,
  alpha,
  beta,
  Delta1,
  Delta2,
  strategy,
  c2,
  c02,
  c3,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3
)

Arguments

n2

total sample size for phase II; must be an even number

kappa

threshold value for the go/no-go decision rule

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

Delta1

assumed true treatment effect for standardized difference in means

Delta2

assumed true treatment effect for standardized difference in means

strategy

choose the strategy: 1 ("only best promising") or 2 ("all promising")

c2

variable per-patient cost for phase II

c02

fixed cost for phase II

c3

variable per-patient cost for phase III

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small", default: 0

stepm1

lower boundary for effect size category "medium" = upper boundary for effect size category "small", default: 0.5

stepl1

lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

Value

The output of the function utility_multiarm_normal() is the expected utility of the program.

Examples

res <- utility_multiarm_normal(n2 = 50, kappa = 0.8, alpha = 0.05, beta = 0.1,
                            Delta1 = 0.375, Delta2 = 0.625, strategy = 1,
                            c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                            K = Inf, N = Inf, S = -Inf,  
                            steps1 = 0, stepm1 = 0.5,   stepl1 = 0.8,
                            b1 = 1000, b2 = 2000, b3 = 3000)

Utility function for multiple endpoints with normally distributed outcomes.

Description

The utility function calculates the expected utility of our drug development program and is given as gains minus costs and depends on the parameters and the expected probability of a successful program. The utility is in a further step maximized by the optimal_multiple_normal() function.

Usage

utility_multiple_normal(
  kappa,
  n2,
  alpha,
  beta,
  Delta1,
  Delta2,
  in1,
  in2,
  sigma1,
  sigma2,
  c2,
  c02,
  c3,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  fixed,
  rho,
  relaxed,
  rsamp
)

Arguments

kappa

threshold value for the go/no-go decision rule; vector for both endpoints

n2

total sample size for phase II; must be an even number

alpha

significance level

beta

1-beta power for calculation of sample size for phase III

Delta1

assumed true treatment effect given as difference in means for endpoint 1

Delta2

assumed true treatment effect given as difference in means for endpoint 2

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

sigma1

standard deviation of first endpoint

sigma2

standard deviation of second endpoint

c2

variable per-patient cost for phase II

c02

fixed cost for phase II

c3

variable per-patient cost for phase III

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small", default: 0

stepm1

lower boundary for effect size category "medium" = upper boundary for effect size category "small", default: 0.5

stepl1

lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

fixed

choose whether the true treatment effects are fixed or random; if TRUE, Delta1 is used as the fixed effect

rho

correlation between the two endpoints

relaxed

relaxed or strict decision rule

rsamp

sample data set for Monte Carlo integration

Value

The output of the function utility_multiple_normal() is the expected utility of the program.
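
This entry has no Examples section; the sketch below reuses the parameter values from the optimal_multiple_normal() fragment shown earlier in this manual. The values chosen for sigma1, sigma2, rho, and the placeholder rsamp matrix are assumptions for illustration, not package defaults:

```r
# Illustrative sketch only -- parameter values are assumptions, not defaults.
# rsamp must be a pre-generated sample data set for Monte Carlo integration;
# a placeholder matrix of standard normal draws is used here.
rsamp <- matrix(stats::rnorm(2000), ncol = 2)
res <- utility_multiple_normal(kappa = 0.1, n2 = 50,
                               alpha = 0.025, beta = 0.1,
                               Delta1 = 0.375, Delta2 = 0.625,
                               in1 = 300, in2 = 600,
                               sigma1 = 2, sigma2 = 1,
                               c2 = 0.75, c02 = 100, c3 = 1, c03 = 150,
                               K = Inf, N = Inf, S = -Inf,
                               steps1 = 0, stepm1 = 0.5, stepl1 = 0.8,
                               b1 = 1000, b2 = 2000, b3 = 3000,
                               fixed = TRUE, rho = 0.5, relaxed = TRUE,
                               rsamp = rsamp)
```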


Utility function for multiple endpoints in a time-to-event-setting

Description

The utility function calculates the expected utility of the drug development program, given as expected gains minus expected costs; it depends on the design parameters and on the expected probability of a successful program. In a further step, the utility is maximized by the optimal_multiple_tte() function. Note that calculating the utility of the program requires two different benefit triples: b11, b21, b31 for the case that endpoint OS is significant, and b12, b22, b32 for the case that it is not.

Usage

utility_multiple_tte(
  n2,
  HRgo,
  alpha,
  beta,
  hr1,
  hr2,
  id1,
  id2,
  c2,
  c02,
  c3,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b11,
  b21,
  b31,
  b12,
  b22,
  b32,
  fixed,
  rho,
  rsamp
)

Arguments

n2

total sample size for phase II; must be an even number

HRgo

threshold value for the go/no-go decision rule

alpha

significance level

beta

type II error rate; 1 - beta is the power used for the phase III sample size calculation

hr1

assumed true treatment effect on HR scale for endpoint OS

hr2

assumed true treatment effect on HR scale for endpoint PFS

id1

amount of information for hr1 in terms of sample size

id2

amount of information for hr2 in terms of sample size

c2

variable per-patient cost for phase II

c02

fixed cost for phase II

c3

variable per-patient cost for phase III

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in HR scale, default: 1

stepm1

lower boundary for effect size category "medium" in HR scale = upper boundary for effect size category "small" in HR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in HR scale = upper boundary for effect size category "medium" in HR scale, default: 0.85

b11

expected gain for effect size category "small" if endpoint OS is significant

b21

expected gain for effect size category "medium" if endpoint OS is significant

b31

expected gain for effect size category "large" if endpoint OS is significant

b12

expected gain for effect size category "small" if endpoint OS is not significant

b22

expected gain for effect size category "medium" if endpoint OS is not significant

b32

expected gain for effect size category "large" if endpoint OS is not significant

fixed

choose whether the true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

rho

correlation between the two endpoints

rsamp

sample data set for Monte Carlo integration

Value

The output of the function utility_multiple_tte() is the expected utility of the program.
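
This entry likewise has no Examples section; the sketch below reuses values from the multitrial examples in this manual. The two benefit triples, rho, and the placeholder rsamp matrix are assumptions for illustration, not package defaults:

```r
# Illustrative sketch only -- parameter values are assumptions, not defaults.
# Two benefit triples are required: b11-b31 if OS is significant, b12-b32 if not.
# rsamp must be a pre-generated sample data set for Monte Carlo integration;
# a placeholder matrix of standard normal draws is used here.
rsamp <- matrix(stats::rnorm(2000), ncol = 2)
res <- utility_multiple_tte(n2 = 50, HRgo = 0.8,
                            alpha = 0.025, beta = 0.1,
                            hr1 = 0.69, hr2 = 0.81,
                            id1 = 210, id2 = 420,
                            c2 = 0.75, c02 = 100, c3 = 1, c03 = 150,
                            K = Inf, N = Inf, S = -Inf,
                            steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                            b11 = 1000, b21 = 2000, b31 = 3000,
                            b12 = 1000, b22 = 1500, b32 = 2000,
                            fixed = TRUE, rho = 0.6, rsamp = rsamp)
```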


Utility function for multitrial programs in a time-to-event setting

Description

The utility function calculates the expected utility of the drug development program, given as expected gains minus expected costs; it depends on the design parameters and on the expected probability of a successful program. In a further step, the utility is maximized by the optimal_multitrial() function.

Usage

utility2(
  d2,
  HRgo,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

utility3(
  d2,
  HRgo,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

utility4(
  d2,
  HRgo,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

Arguments

d2

total number of events in phase II

HRgo

threshold value for the go/no-go decision rule

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

alpha

significance level

beta

type II error rate; 1 - beta is the power used for the phase III sample size calculation

xi2

event rate for phase II

xi3

event rate for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

case

choose case: "at least 1, 2 or 3 significant trials needed for approval"

fixed

choose whether the true treatment effects are fixed or random

Value

The output of the functions utility2(), utility3() and utility4() is the expected utility of the program when 2, 3 or 4 phase III trials are performed.

Examples

res <- utility2(d2 = 50, HRgo = 0.8, w = 0.3,
                hr1 = 0.69, hr2 = 0.81,
                id1 = 210, id2 = 420,
                alpha = 0.025, beta = 0.1, xi2 = 0.7, xi3 = 0.7,
                c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                K = Inf, N = Inf, S = -Inf,
                b1 = 1000, b2 = 2000, b3 = 3000,
                case = 2, fixed = TRUE)
res <- utility3(d2 = 50, HRgo = 0.8, w = 0.3,
                hr1 = 0.69, hr2 = 0.81,
                id1 = 210, id2 = 420,
                alpha = 0.025, beta = 0.1, xi2 = 0.7, xi3 = 0.7,
                c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                K = Inf, N = Inf, S = -Inf,
                b1 = 1000, b2 = 2000, b3 = 3000,
                case = 2, fixed = TRUE)
res <- utility4(d2 = 50, HRgo = 0.8, w = 0.3,
                hr1 = 0.69, hr2 = 0.81,
                id1 = 210, id2 = 420,
                alpha = 0.025, beta = 0.1, xi2 = 0.7, xi3 = 0.7,
                c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                K = Inf, N = Inf, S = -Inf,
                b1 = 1000, b2 = 2000, b3 = 3000,
                case = 3, fixed = TRUE)

Utility function for multitrial programs with binary distributed outcomes

Description

The utility function calculates the expected utility of the drug development program, given as expected gains minus expected costs; it depends on the design parameters and on the expected probability of a successful program. In a further step, the utility is maximized by the optimal_multitrial_binary() function.

Usage

utility2_binary(
  n2,
  RRgo,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

utility3_binary(
  n2,
  RRgo,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

utility4_binary(
  n2,
  RRgo,
  w,
  p0,
  p11,
  p12,
  in1,
  in2,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

Arguments

n2

total sample size for phase II; must be an even number

RRgo

threshold value for the go/no-go decision rule

w

weight for mixture prior distribution

p0

assumed true rate of control group

p11

first assumed true rate of treatment group (for the prior distribution)

p12

second assumed true rate of treatment group (for the prior distribution)

in1

amount of information for p11 in terms of sample size

in2

amount of information for p12 in terms of sample size

alpha

significance level

beta

type II error rate; 1 - beta is the power used for the phase III sample size calculation

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

case

choose case: "at least 1, 2 or 3 significant trials needed for approval"

fixed

choose whether the true treatment effects are fixed or random

Value

The output of the functions utility2_binary(), utility3_binary() and utility4_binary() is the expected utility of the program when 2, 3 or 4 phase III trials are performed.

Examples

res <- utility2_binary(n2 = 50, RRgo = 0.8, w = 0.3,
                       p0 = 0.6, p11 = 0.3, p12 = 0.5,
                       in1 = 300, in2 = 600, alpha = 0.025, beta = 0.1,
                       c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                       K = Inf, N = Inf, S = -Inf,
                       b1 = 1000, b2 = 2000, b3 = 3000,
                       case = 2, fixed = TRUE)
res <- utility3_binary(n2 = 50, RRgo = 0.8, w = 0.3,
                       p0 = 0.6, p11 = 0.3, p12 = 0.5,
                       in1 = 300, in2 = 600, alpha = 0.025, beta = 0.1,
                       c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                       K = Inf, N = Inf, S = -Inf,
                       b1 = 1000, b2 = 2000, b3 = 3000,
                       case = 2, fixed = TRUE)
res <- utility4_binary(n2 = 50, RRgo = 0.8, w = 0.3,
                       p0 = 0.6, p11 = 0.3, p12 = 0.5,
                       in1 = 300, in2 = 600, alpha = 0.025, beta = 0.1,
                       c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                       K = Inf, N = Inf, S = -Inf,
                       b1 = 1000, b2 = 2000, b3 = 3000,
                       case = 3, fixed = TRUE)

Utility function for multitrial programs with normally distributed outcomes

Description

The utility function calculates the expected utility of the drug development program, given as expected gains minus expected costs; it depends on the design parameters and on the expected probability of a successful program. In a further step, the utility is maximized by the optimal_multitrial_normal() function.

Usage

utility2_normal(
  n2,
  kappa,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

utility3_normal(
  n2,
  kappa,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

utility4_normal(
  n2,
  kappa,
  w,
  Delta1,
  Delta2,
  in1,
  in2,
  a,
  b,
  alpha,
  beta,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  b1,
  b2,
  b3,
  case,
  fixed
)

Arguments

n2

total sample size for phase II; must be an even number

kappa

threshold value for the go/no-go decision rule

w

weight for mixture prior distribution

Delta1

first assumed true treatment effect given as standardized difference in means (for the prior distribution)

Delta2

second assumed true treatment effect given as standardized difference in means (for the prior distribution)

in1

amount of information for Delta1 in terms of sample size

in2

amount of information for Delta2 in terms of sample size

a

lower boundary for the truncation

b

upper boundary for the truncation

alpha

significance level

beta

type II error rate; 1 - beta is the power used for the phase III sample size calculation

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

case

choose case: "at least 1, 2 or 3 significant trials needed for approval"

fixed

choose whether the true treatment effects are fixed or random

Value

The output of the functions utility2_normal(), utility3_normal() and utility4_normal() is the expected utility of the program when 2, 3 or 4 phase III trials are performed.

Examples

res <- utility2_normal(kappa = 0.1, n2 = 50, alpha = 0.025, beta = 0.1, w = 0.3,
                       Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                       a = 0.25, b = 0.75,
                       c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,
                       K = Inf, N = Inf, S = -Inf,
                       b1 = 3000, b2 = 8000, b3 = 10000,
                       case = 2, fixed = TRUE)
res <- utility3_normal(kappa = 0.1, n2 = 50, alpha = 0.025, beta = 0.1, w = 0.3,
                       Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                       a = 0.25, b = 0.75,
                       c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,
                       K = Inf, N = Inf, S = -Inf,
                       b1 = 3000, b2 = 8000, b3 = 10000,
                       case = 2, fixed = TRUE)
res <- utility4_normal(kappa = 0.1, n2 = 50, alpha = 0.025, beta = 0.1, w = 0.3,
                       Delta1 = 0.375, Delta2 = 0.625, in1 = 300, in2 = 600,
                       a = 0.25, b = 0.75,
                       c2 = 0.675, c3 = 0.72, c02 = 15, c03 = 20,
                       K = Inf, N = Inf, S = -Inf,
                       b1 = 3000, b2 = 8000, b3 = 10000,
                       case = 3, fixed = TRUE)

Utility function for time-to-event outcomes

Description

The utility function calculates the expected utility of the drug development program, given as expected gains minus expected costs; it depends on the design parameters and on the expected probability of a successful program. In a further step, the utility is maximized by the optimal_tte() function.

Usage

utility_tte(
  d2,
  HRgo,
  w,
  hr1,
  hr2,
  id1,
  id2,
  alpha,
  beta,
  xi2,
  xi3,
  c2,
  c3,
  c02,
  c03,
  K,
  N,
  S,
  steps1,
  stepm1,
  stepl1,
  b1,
  b2,
  b3,
  gamma,
  fixed
)

Arguments

d2

total number of events for phase II; must be an even number

HRgo

threshold value for the go/no-go decision rule

w

weight for mixture prior distribution

hr1

first assumed true treatment effect on HR scale for prior distribution

hr2

second assumed true treatment effect on HR scale for prior distribution

id1

amount of information for hr1 in terms of number of events

id2

amount of information for hr2 in terms of number of events

alpha

significance level

beta

type II error rate; 1 - beta is the power used for the phase III sample size calculation

xi2

event rate for phase II

xi3

event rate for phase III

c2

variable per-patient cost for phase II

c3

variable per-patient cost for phase III

c02

fixed cost for phase II

c03

fixed cost for phase III

K

constraint on the costs of the program, default: Inf, i.e. no constraint

N

constraint on the total expected sample size of the program, default: Inf, i.e. no constraint

S

constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint

steps1

lower boundary for effect size category "small" in HR scale, default: 1

stepm1

lower boundary for effect size category "medium" in HR scale = upper boundary for effect size category "small" in HR scale, default: 0.95

stepl1

lower boundary for effect size category "large" in HR scale = upper boundary for effect size category "medium" in HR scale, default: 0.85

b1

expected gain for effect size category "small"

b2

expected gain for effect size category "medium"

b3

expected gain for effect size category "large"

gamma

difference in treatment effect due to different population structures in phase II and III

fixed

choose whether the true treatment effects are fixed or random; if TRUE, hr1 is used as the fixed effect

Value

The output of the function utility_tte() is the expected utility of the program.

Examples

res <- utility_tte(d2 = 50, HRgo = 0.8, w = 0.3,
                   hr1 = 0.69, hr2 = 0.81,
                   id1 = 280, id2 = 420, xi2 = 0.7, xi3 = 0.7,
                   alpha = 0.025, beta = 0.1,
                   c2 = 0.75, c3 = 1, c02 = 100, c03 = 150,
                   K = Inf, N = Inf, S = -Inf,
                   steps1 = 1, stepm1 = 0.95, stepl1 = 0.85,
                   b1 = 1000, b2 = 2000, b3 = 3000,
                   gamma = 0, fixed = TRUE)