Title: Developing A Novel Responsible Artificial Intelligence Framework to Advance Precision Medicine
Committee:
Dr. M. Wang, Advisor
Dr. Heck, Chair
Dr. Kumar
Dr. Wattenbarger
Dr. Iwinski
Abstract: The objective of this proposed research is to develop a precision medicine pipeline using responsible artificial intelligence (AI) to facilitate clinical translational research. Precision medicine aims to provide patients with individualized healthcare, including targeted prevention, precise diagnosis, personalized treatment, and accurate prognosis. Using large-scale, multi-modal biomedical data that capture individual variability, AI can improve healthcare for each individual by identifying the most appropriate medical decisions. A standard data analysis pipeline includes data collection, quality control, feature extraction, knowledge modeling, decision making, and post-deployment considerations. Despite the abundance of peer-reviewed papers demonstrating novel AI-enabled solutions for precision medicine, few of these solutions have had a significant clinical impact in terms of translating scientific research into practice and benefiting patients and clinicians. Several barriers contribute to this gap. First, extracting information and knowledge from multi-site and multi-modal clinical data remains a significant challenge in biomedical research. Second, lack of explainability is a major obstacle to the widespread adoption of AI-enabled clinical decision support systems in real-world settings: clinicians continue to struggle to comprehend the decisions made by black-box models in precision medicine, where experts require substantially more information than a simple binary prediction to support a diagnosis. Third, the deployment of machine learning algorithms in healthcare raises multiple ethical concerns, particularly when models have the potential to amplify existing health inequities.
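The standard pipeline stages named above can be illustrated with a minimal sketch. All function names, record fields, and the toy threshold "model" below are illustrative placeholders, not part of the proposed framework:

```python
# Minimal sketch of the standard data analysis pipeline stages: data
# collection, quality control, feature extraction, knowledge modeling,
# and decision making. Names and values are illustrative only.

def collect_data():
    # Data collection: gather raw records (toy stand-in values).
    return [{"id": 1, "value": 0.9},
            {"id": 2, "value": None},
            {"id": 3, "value": 0.4}]

def quality_control(records):
    # Quality control: drop records with missing measurements.
    return [r for r in records if r["value"] is not None]

def extract_features(records):
    # Feature extraction: map each record to a numeric feature vector.
    return [[r["value"]] for r in records]

def model_knowledge(features):
    # Knowledge modeling: here, a trivial threshold "model".
    threshold = 0.5
    return [f[0] >= threshold for f in features]

def make_decision(predictions):
    # Decision making: convert model output into an actionable label.
    return ["follow-up" if p else "routine" for p in predictions]

def run_pipeline():
    # Chain the stages end to end; post-deployment monitoring would
    # wrap this entire flow in practice.
    records = quality_control(collect_data())
    return make_decision(model_knowledge(extract_features(records)))
```

In a real clinical setting each stage would be far more involved (multi-site harmonization, learned models, audit logging), but the staged structure is the same.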
Responsible AI is concerned with the development, implementation, and use of ethical, transparent, and accountable AI technology, with the goals of reducing bias, promoting fairness and equality, and making outcomes interpretable and explainable; these goals are especially important in the healthcare context. Therefore, I aim to promote translational research through responsible AI by (1) developing multi-site and multi-modal data integration methods for knowledge extraction from heterogeneous clinical data (Aim 1), (2) presenting explainable AI algorithms for model interpretation in clinical decision support systems to gain clinical trust (Aim 2), and (3) proposing a translational framework for precision medicine that leverages responsible AI to advance accountability, trustworthiness, and fairness in real-world clinical practice (Aim 3). By leveraging advances in multi-modal biomedical data for precision medicine, the proposed framework is expected to be one of the first responsible AI pipelines in translational clinical research to promote the adoption of AI in real-world practice.