OmiEmbed: reconstruct comprehensive phenotypic information from multi-omics data using multi-task deep learning
High-dimensional omics data contain intrinsic biomedical information that is crucial for personalised medicine. Nevertheless, it is challenging to capture this information from genome-wide data due to the large number of molecular features and the small number of available samples, a problem known in machine learning as "the curse of dimensionality". To tackle this problem and pave the way for machine-learning-aided precision medicine, we proposed a unified multi-task deep learning framework called OmiEmbed to capture a holistic and relatively precise phenotype profile from high-dimensional omics data. The deep embedding module of OmiEmbed learnt an omics embedding that mapped multiple omics data types into a latent space of lower dimensionality. Based on this new representation of the multi-omics data, the different downstream networks of OmiEmbed were trained together with a multi-task strategy to predict the comprehensive phenotype profile of each sample. We trained the model on two publicly available omics datasets to evaluate the performance of OmiEmbed. The OmiEmbed model achieved promising results on multiple downstream tasks, including dimensionality reduction, tumour type classification, multi-omics integration, demographic and clinical feature reconstruction, and survival prediction. Instead of training and applying each downstream network separately, the multi-task strategy combined them and performed multiple tasks simultaneously and efficiently; the model achieved better performance with the multi-task strategy compared to training each task individually. OmiEmbed is a powerful tool to accurately capture comprehensive phenotypic information from high-dimensional omics data, and has great potential to facilitate more accurate and personalised clinical decision making.
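To make the shared-embedding-plus-downstream-heads idea concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: an encoder maps high-dimensional omics features into a low-dimensional latent space, and several task heads (reconstruction, tumour type classification, a demographic regression) are trained jointly on that embedding with a weighted sum of losses. All names, layer sizes, head choices, and loss weights here are illustrative assumptions; the abstract does not specify the actual OmiEmbed architecture (e.g., the exact embedding network or the survival-prediction head), so this is a sketch of the multi-task strategy, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OmiEmbedSketch(nn.Module):
    """Hypothetical sketch: shared omics embedding + multi-task heads."""

    def __init__(self, input_dim, latent_dim, n_tumour_types):
        super().__init__()
        # Deep embedding module: maps high-dimensional (multi-)omics
        # features into a lower-dimensional latent space.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Reconstruction head (dimensionality-reduction objective).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim),
        )
        # Downstream networks sharing the same embedding.
        self.tumour_head = nn.Linear(latent_dim, n_tumour_types)  # classification
        self.age_head = nn.Linear(latent_dim, 1)                  # demographic regression

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.tumour_head(z), self.age_head(z)

def multi_task_loss(model, x, tumour_label, age, w=(1.0, 1.0, 1.0)):
    """Weighted sum of per-task losses, optimised jointly (multi-task strategy)."""
    recon, logits, age_pred = model(x)
    l_recon = F.mse_loss(recon, x)
    l_cls = F.cross_entropy(logits, tumour_label)
    l_age = F.mse_loss(age_pred.squeeze(-1), age)
    return w[0] * l_recon + w[1] * l_cls + w[2] * l_age
```

Because all heads backpropagate through the same encoder, each task acts as a regulariser for the others, which is consistent with the abstract's observation that joint multi-task training outperformed training the downstream networks individually.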