A Contextual-bandit-based Approach for Informed Decision-making in Clinical Trials

by Yogatheesan Varatharajah et al.

Clinical trials involving multiple treatments utilize randomization of the treatment assignments to enable the evaluation of treatment efficacies in an unbiased manner. Such evaluation is performed in post hoc studies that usually use supervised-learning methods, which rely on large amounts of data collected in a randomized fashion. That approach often proves to be suboptimal in that some participants may suffer, and even die, as a result of not having received the most appropriate treatments during the trial. Reinforcement-learning methods improve the situation by making it possible to learn the treatment efficacies dynamically during the course of the trial and to adapt treatment assignments accordingly. Recent efforts using multi-arm bandits, a type of reinforcement-learning method, have focused on maximizing clinical outcomes for a population that was assumed to be homogeneous. However, those approaches fail to account for the variability among participants that has become increasingly evident in recent clinical-trial-based studies. We present a contextual-bandit-based online treatment optimization algorithm that, in choosing treatments for new participants in the study, takes into account not only the maximization of the clinical outcomes but also the patient characteristics. We evaluated our algorithm using a real clinical trial dataset from the International Stroke Trial. The results of our retrospective analysis indicate that the proposed approach performs significantly better than either a random assignment of treatments (the current gold standard) or a multi-arm-bandit-based approach, providing substantial gains in the percentage of participants who are assigned the most suitable treatments. In particular, the contextual-bandit approach provides a 72.63% gain in that percentage compared to a random assignment.
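The abstract does not specify the exact algorithm used, but the idea of choosing treatments from patient characteristics can be illustrated with a standard contextual-bandit method such as disjoint LinUCB (Li et al., 2010). The sketch below is not the authors' implementation: the two "treatments," the single synthetic patient covariate, and the 0/1 outcome reward are all illustrative assumptions. Each arm maintains ridge-regression statistics over contexts, and the arm with the highest upper confidence bound is assigned to the next participant.

```python
import numpy as np

class LinUCB:
    """Minimal disjoint LinUCB contextual bandit (illustrative sketch)."""

    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha  # width of the exploration bonus
        # Per-arm ridge-regression statistics: A = X^T X + I, b = X^T r
        self.A = [np.eye(n_features) for _ in range(n_arms)]
        self.b = [np.zeros(n_features) for _ in range(n_arms)]

    def select(self, x):
        """Choose the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                     # ridge estimate of arm's weights
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Incorporate the observed outcome for the chosen arm only."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy simulation (hypothetical data): two treatments whose efficacy
# depends on the sign of a single patient covariate.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=2, n_features=2, alpha=1.0)
n, correct = 2000, 0
for _ in range(n):
    x = np.array([1.0, rng.uniform(-1, 1)])  # [bias term, patient covariate]
    best = 0 if x[1] < 0 else 1              # ground-truth best treatment
    arm = bandit.select(x)
    reward = 1.0 if arm == best else 0.0     # 1 = favorable clinical outcome
    bandit.update(arm, x, reward)
    correct += int(arm == best)
print(correct / n)  # fraction of participants given the more suitable treatment
```

Because the bandit updates after every participant, most of the cohort receives the better-suited treatment once the per-arm models separate, which is the mechanism behind the gains reported in the abstract; a multi-arm bandit without the covariate would converge to a single arm for everyone.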


