Charles Anthony Ellis
BME PhD Proposal Presentation
Date: 2022-06-16
Time: 12 pm ET
Location / Meeting Link: https://emory.zoom.us/j/98160922337
Committee Members:
Vince D. Calhoun, PhD (advisor)
Gari D. Clifford, DPhil
May D. Wang, PhD
Sergey M. Plis, PhD
Robyn L. Miller, PhD
Title: Novel Explainability Approaches for Analyzing Multimodal Neuroimaging Data with Supervised and Unsupervised Machine Learning
Abstract:
In recent years, machine learning methods have played an increasingly prominent role in neuroimaging studies involving functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and multimodal data. These methods have the potential to contribute to the identification of subtypes of neurological and neuropsychiatric diseases, to the identification of novel diagnostic biomarkers, and to the eventual development of diagnostic and prognostic clinical decision support systems. However, the majority of existing applications of these methods to multimodal neuroimaging data have not employed any form of explainability, which greatly limits their ability to reach their full clinical and research potential. In a clinical setting, clinicians have an ethical imperative to explain their recommendations to patients and are hesitant to use intelligent systems that limit that ability. In a research setting, it will be difficult for new discoveries to be made with supervised and unsupervised machine learning if these approaches are not paired with explainability. The current lack of explainability in machine learning and deep learning analyses of neuroimaging data is largely attributable to the lack of explainability methods developed specifically for neuroimaging data analysis. In this thesis proposal, we propose the development of a series of novel explainability methods that can be applied to unsupervised clustering algorithms and supervised deep learning classifiers and that are uniquely adapted to the context of fMRI, EEG, and multimodal neuroimaging analysis. We demonstrate the utility of our methods within the contexts of sleep stage classification, schizophrenia, and Alzheimer's disease. We hope that these methods will enable machine learning and deep learning approaches to have a more widespread positive impact on biomedical neuroimaging research by facilitating new discoveries, and on clinical neuroimaging more broadly by enabling the creation of novel diagnostic and prognostic clinical decision support systems.
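The abstract describes explainability for supervised classifiers only at a high level. As a purely illustrative sketch (not the methods proposed in this thesis), one simple family of explainability approaches estimates the importance of each input channel by perturbing it and measuring the resulting drop in classifier performance; the synthetic data, channel layout, and logistic-regression classifier below are all assumptions chosen for illustration.

```python
# Illustrative sketch only: permutation-based channel importance for a
# toy classifier on synthetic multichannel data (not the proposal's method).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: 200 samples, 4 "channels", 10 features per channel.
n_samples, n_channels, n_feats = 200, 4, 10
X = rng.normal(size=(n_samples, n_channels, n_feats))
# Make channel 1 informative about the label.
y = (X[:, 1, :].mean(axis=1) > 0).astype(int)

X_flat = X.reshape(n_samples, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
baseline = accuracy_score(y_te, clf.predict(X_te))

# Shuffle one channel's features across test samples at a time and
# record how much test accuracy drops; a larger drop suggests the
# classifier relies more heavily on that channel.
for ch in range(n_channels):
    X_perm = X_te.copy().reshape(-1, n_channels, n_feats)
    X_perm[:, ch, :] = X_perm[rng.permutation(len(X_perm)), ch, :]
    acc = accuracy_score(y_te, clf.predict(X_perm.reshape(len(X_perm), -1)))
    print(f"channel {ch}: importance = {baseline - acc:.3f}")
```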