PhD Proposal by Edward Choi


Event Details
  • Date/Time:
    • Monday, November 27, 2017
      12:00 pm - 2:00 pm
  • Location: Klaus 1202
Summaries

Summary Sentence: Doctor AI: Interpretable Deep Learning for Modeling Electronic Health Records


 

Title:

Doctor AI: Interpretable Deep Learning for Modeling Electronic Health Records

 

Edward Choi

Ph.D. Student

School of Computational Science & Engineering

College of Computing

Georgia Institute of Technology

 

Date: Monday, November 27, 2017

Time: 12:00pm-2:00pm (EST)

Location: Klaus 1202

 

Committee:

Dr. Jimeng Sun (Advisor, School of Computational Science & Engineering, Georgia Institute of Technology)

Dr. James Rehg (School of Interactive Computing, Georgia Institute of Technology)

Dr. Jon Duke (School of Computational Science & Engineering, Georgia Institute of Technology)

Dr. Jacob Eisenstein (School of Interactive Computing, Georgia Institute of Technology)

 

Abstract:

Deep learning has recently shown superior performance compared to traditional statistical methods in complex domains such as computer vision, audio processing, and natural language processing. Naturally, deep learning techniques, combined with the large volumes of electronic health record (EHR) data generated by healthcare organizations, have the potential to bring dramatic changes to the healthcare industry. However, typical deep learning models are highly expressive black boxes that are difficult to adopt in real-world healthcare applications due to their lack of interpretability. For deep learning methods to be readily adopted in real-world clinical practice, deep learning models must be interpretable without sacrificing prediction accuracy.

In this thesis, we propose interpretable and accurate deep learning methods for modeling EHR data, focusing specifically on longitudinal EHR data. We begin with a direct application of a well-known deep learning algorithm, the recurrent neural network (RNN), to capture the temporal nature of longitudinal EHR; a sketch of this setup follows below. Then, building on this initial approach, we develop interpretable deep learning models that address three aspects of computational healthcare: efficient representation learning of medical concepts, code-level interpretation for sequence predictions, and incorporating domain knowledge into the model. Another important aspect, which we address as future work, is extending the interpretable deep learning framework to incorporate multiple data modalities such as lab measurements and clinical text.
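
As a concrete illustration of the initial approach, the following minimal PyTorch sketch (not from the proposal; the class name, dimensions, and training target are assumptions) treats each patient as a sequence of visits, each visit as a multi-hot vector over a vocabulary of medical codes, and uses a GRU to predict the codes of the next visit:

import torch
import torch.nn as nn

class NextVisitRNN(nn.Module):
    # Hypothetical sketch: each patient is a sequence of visits, and each
    # visit is a multi-hot vector over a vocabulary of medical codes
    # (e.g., diagnoses, procedures, medications).
    def __init__(self, num_codes, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Linear(num_codes, emb_dim)   # multi-hot visit -> dense embedding
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_codes)  # logits over next-visit codes

    def forward(self, visits):                       # visits: (batch, T, num_codes)
        x = torch.relu(self.embed(visits))           # (batch, T, emb_dim)
        h, _ = self.rnn(x)                           # (batch, T, hidden_dim)
        return self.out(h)                           # (batch, T, num_codes)

# Toy usage: 2 patients, 5 visits each, a 1,000-code vocabulary.
model = NextVisitRNN(num_codes=1000)
visits = torch.bernoulli(torch.full((2, 5, 1000), 0.01))
logits = model(visits)
# The hidden state at visit t predicts the codes of visit t+1.
loss = nn.functional.binary_cross_entropy_with_logits(logits[:, :-1], visits[:, 1:])
loss.backward()

The interpretability work described above then builds on this kind of model, for example by explaining which visits and which codes within them drive a given prediction.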

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Nov 28, 2017 - 8:02am
  • Last Updated: Nov 28, 2017 - 8:02am