Boosting hazard regression with time-varying covariates

01/27/2017
by   Donald K. K. Lee, et al.

Consider a left-truncated right-censored survival process whose evolution depends on time-varying covariates. Given functional data samples from the process, we propose a gradient boosting procedure for estimating its log-intensity function in a flexible manner that captures time-covariate interactions. The estimator is shown to be consistent if the model is correctly specified; alternatively, an oracle inequality can be demonstrated for tree-based models. We use the procedure to shed new light on a question from the operations literature concerning the effect of workload on service rates in an emergency department. To avoid overfitting, boosting employs several regularization devices. One of them is step-size restriction, but the rationale for this is somewhat mysterious from the viewpoint of consistency: in theoretical treatments of classification and regression problems, unrestricted greedy step-sizes appear to suffice. Given that the partial log-likelihood functional for hazard regression has unbounded curvature, our study suggests that step-size restriction might be a mechanism for preventing the curvature of the risk from derailing convergence.
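To make the role of step-size restriction concrete, here is a minimal sketch of gradient boosting with a shrunken step size. For simplicity it uses regression stumps under squared loss rather than the paper's partial log-likelihood for the log-intensity; the function names (`fit_stump`, `boost`) and the shrinkage value `nu` are illustrative choices, not the authors' implementation:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split stump (threshold + two constants) for residuals r."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        cl, cr = left.mean(), right.mean()
        sse = ((left - cl) ** 2).sum() + ((right - cr) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, cl, cr)
    _, t, cl, cr = best
    return lambda z: np.where(z <= t, cl, cr)

def boost(x, y, n_rounds=200, nu=0.1):
    """Gradient boosting: each round fits the negative gradient of the loss
    (here, the residual under squared loss) and takes only a nu-sized step."""
    F = np.zeros_like(y, dtype=float)
    stumps = []
    for _ in range(n_rounds):
        g = y - F                  # negative gradient of squared loss at F
        h = fit_stump(x, g)
        F += nu * h(x)             # restricted step: nu << 1, not a full greedy step
        stumps.append(h)
    return lambda z: nu * sum(h(z) for h in stumps)
```

In the hazard-regression setting studied in the paper, the squared loss above would be replaced by the negative partial log-likelihood, whose unbounded curvature is precisely what makes the small step size `nu` more than a cosmetic choice.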
