Multimodal and multicontrast image fusion via deep generative models

03/28/2023
by Giovanna Maria Dimitri, et al.

Recently, it has become increasingly evident that classic diagnostic labels cannot reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses (e.g., depression, anxiety disorders, behavioral phenotypes). Patient heterogeneity can be better described by grouping individuals into novel categories based on empirically derived sections of intersecting continua that span across and beyond traditional categorical borders. In this context, neuroimaging data carry a wealth of spatiotemporally resolved information about each patient's brain. However, these data are usually heavily collapsed a priori through procedures that are not learned as part of model training, and are consequently not optimized for the downstream prediction task. The reason is computational: each participant typically comes with multiple whole-brain 3D imaging modalities, often accompanied by a deep genotypic and phenotypic characterization, which poses formidable computational challenges. In this paper we design a deep learning architecture based on generative models, rooted in a modular approach and separable convolutional blocks, to (a) fuse multiple 3D neuroimaging modalities at the voxel level, (b) convert them into informative latent embeddings through heavy dimensionality reduction, and (c) maintain good generalizability and minimal information loss. As a proof of concept, we test our architecture on the well-characterized Human Connectome Project database, demonstrating that our latent embeddings can be clustered into easily separable subject strata which, in turn, map onto phenotypic information that was not included in the embedding creation process. This may aid in predicting disease evolution and drug response, thereby supporting mechanistic disease understanding and empowering clinical trials.
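The abstract mentions two architectural ingredients worth unpacking: voxel-wise fusion of co-registered modalities and depthwise-separable 3D convolutions for dimensionality reduction. The paper itself does not publish this code; the sketch below is a minimal, hypothetical PyTorch rendering of those two ideas, where the module names, channel widths, and the choice to fuse modalities by channel stacking are all illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Depthwise-separable 3D convolution: a per-channel spatial
    convolution followed by a 1x1x1 pointwise mixing convolution.
    The factorization cuts parameters roughly by the kernel volume,
    which matters for whole-brain 3D inputs."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   stride=stride,
                                   padding=kernel_size // 2,
                                   groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class MultimodalEncoder(nn.Module):
    """Hypothetical encoder: modalities are stacked channel-wise
    (voxel-wise fusion), then compressed by separable convolutional
    blocks into a low-dimensional latent embedding."""
    def __init__(self, n_modalities=2, latent_dim=64):
        super().__init__()
        self.blocks = nn.Sequential(
            SeparableConv3d(n_modalities, 16, stride=2), nn.ReLU(),
            SeparableConv3d(16, 32, stride=2), nn.ReLU(),
            SeparableConv3d(32, 64, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse remaining spatial dims
        )
        self.to_latent = nn.Linear(64, latent_dim)

    def forward(self, x):             # x: (batch, n_modalities, D, H, W)
        h = self.blocks(x).flatten(1)
        return self.to_latent(h)      # (batch, latent_dim)

# Two co-registered 3D volumes (e.g., two MRI contrasts) per subject.
x = torch.randn(1, 2, 96, 96, 96)
z = MultimodalEncoder()(x)
print(z.shape)  # torch.Size([1, 64])
```

In the paper's downstream analysis, embeddings like `z` (computed for many subjects) would then be clustered, e.g. with k-means, and the resulting strata compared against held-out phenotypic variables; the 64-dimensional latent size here is an arbitrary placeholder for the paper's "heavy dimensionality reduction".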


