Li Tong
PhD Proposal Presentation
Date: April 30, 2019
Time: 10:30 am
Location: Conference room 2110, 2nd floor of Whitaker Building
Committee Members:
May D. Wang, Ph.D. (Advisor)
Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University
Omer T. Inan, Ph.D.
School of Electrical and Computer Engineering, Georgia Institute of Technology
Wei Sun, Ph.D.
Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University
Nikhil K. Chanani, M.D.
Department of Pediatrics, Emory University School of Medicine
Shriprasad Deshpande, M.D.
Children’s National Health System, George Washington University, Washington, DC
Title: Enabling precision medicine by integrating multi-modal biomedical data using consensus neural networks
Abstract:
With the advancement of technologies such as high-throughput sequencing and wearable devices, a massive amount of multi-modal biomedical data has been generated. However, multi-modal biomedical data have not yet been fully utilized in clinical practice. Beyond harmonizing irregular, heterogeneous data and coping with the curse of dimensionality, a major challenge in utilizing multi-modal biomedical data is modeling the complex interactions between modalities. For modalities with close interactions (e.g., multi-omics data), variables from each data modality are connected by association or causal relationships, which this work proposes to model with consensus neural networks. The consensus regularization requires the features encoded from different modalities of the same subject to agree with one another in a common feature space. By imposing this consensus constraint on the feature representations, the models are expected to learn not only from the classification labels but also from all available data modalities. For modalities with looser connections (e.g., genomic data and pathological images), the heterogeneous information is captured by concatenating independently encoded features so that all modalities jointly contribute to the final decision. By integrating multi-modal biomedical data with deep neural networks, healthcare quality is expected to improve through a more comprehensive evaluation of each patient.
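
To make the two fusion strategies concrete, below is a minimal sketch in PyTorch, assuming two modalities encoded by simple feed-forward networks; the hidden size, the mean-squared-error consensus penalty, and the weight lam are illustrative assumptions rather than the proposal's actual architecture.

    # A minimal sketch of consensus regularization plus feature
    # concatenation; PyTorch is assumed, and the layer sizes and the
    # MSE consensus penalty are illustrative choices.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConsensusNet(nn.Module):
        """Encodes two modalities separately, classifies from the
        concatenated embeddings, and exposes the embeddings so a
        consensus penalty can pull them together."""
        def __init__(self, dim_a, dim_b, hidden=64, n_classes=2):
            super().__init__()
            self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
            self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
            self.clf = nn.Linear(2 * hidden, n_classes)

        def forward(self, x_a, x_b):
            z_a, z_b = self.enc_a(x_a), self.enc_b(x_b)
            logits = self.clf(torch.cat([z_a, z_b], dim=1))
            return logits, z_a, z_b

    def consensus_loss(logits, y, z_a, z_b, lam=0.1):
        # Classification loss plus a penalty that requires the two
        # per-modality embeddings of the same subject to agree.
        return F.cross_entropy(logits, y) + lam * F.mse_loss(z_a, z_b)

    # Example with random data: 8 subjects, modality sizes 100 and 20.
    model = ConsensusNet(dim_a=100, dim_b=20)
    x_a, x_b = torch.randn(8, 100), torch.randn(8, 20)
    y = torch.randint(0, 2, (8,))
    logits, z_a, z_b = model(x_a, x_b)
    loss = consensus_loss(logits, y, z_a, z_b)

In this sketch, the consensus term enforces agreement for closely interacting modalities, while setting lam to zero reduces the model to plain late fusion by concatenation, matching the strategy described for loosely connected modalities.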