Multimodal LLMs for health grounded in individual-specific data

by Anastasiya Belyaeva et al.

Foundation large language models (LLMs) have shown an impressive ability to solve tasks across a wide range of fields, including health. To effectively solve personalized health tasks, LLMs need the ability to ingest a diversity of data modalities that are relevant to an individual's health status. In this paper, we take a step towards creating multimodal LLMs for health that are grounded in individual-specific data by developing a framework (HeLM: Health Large Language Model for Multimodal Understanding) that enables LLMs to use high-dimensional clinical modalities to estimate underlying disease risk. HeLM handles complex data modalities by learning an encoder that maps them into the LLM's token embedding space, and handles simple modalities such as tabular data by serializing them into text. Using data from the UK Biobank, we show that HeLM can effectively use demographic and clinical features in addition to high-dimensional time-series data to estimate disease risk. For example, HeLM achieves an AUROC of 0.75 for asthma prediction when combining tabular and spirogram data modalities, compared with 0.49 when using tabular data alone. Overall, we find that HeLM outperforms or performs at parity with classical machine learning approaches across a selection of eight binary traits. Furthermore, we investigate downstream uses of this model, such as its generalizability to out-of-distribution traits and its ability to power conversations around individual health and wellness.
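The abstract describes two ingestion paths: learned encoders for complex modalities and text serialization for simple tabular data. The paper does not specify the serialization format, so the following is a minimal, hypothetical sketch of the tabular path only; the function name `serialize_tabular` and the sentence template are illustrative assumptions, not the authors' implementation.

```python
def serialize_tabular(features: dict) -> str:
    """Render tabular clinical features as natural-language text that can be
    fed to an LLM as part of a prompt (illustrative template, not HeLM's)."""
    parts = [f"{name} is {value}" for name, value in features.items()]
    return "Patient record: " + ", ".join(parts) + "."

# Hypothetical example record with demographic and clinical features.
prompt = serialize_tabular({"age": 60, "sex": "female", "BMI": 27.3})
print(prompt)
```

A serialized string like this would be concatenated with task instructions (e.g., a disease-risk question), while high-dimensional inputs such as spirograms would instead pass through a learned encoder into the token embedding space.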

