Title: Explaining Model Decisions and Correcting them via Human Feedback
Ramprasaath R. Selvaraju
Ph.D. Candidate in Computer Science
School of Interactive Computing
Georgia Institute of Technology
Date: Monday, March 23rd, 2020
Time: 12:00-2:00 PM (EST)
BlueJeans: https://bluejeans.com/4346160082
**Note: this defense is remote-only due to the Institute's guidelines on COVID-19**
Committee:
Dr. Devi Parikh (Advisor), School of Interactive Computing, Georgia Institute of Technology
Dr. Dhruv Batra, School of Interactive Computing, Georgia Institute of Technology
Dr. Judy Hoffman, School of Interactive Computing, Georgia Institute of Technology
Dr. Stefan Lee, School of Electrical Engineering and Computer Science, Oregon State University
Dr. Been Kim, Google Brain
Abstract:
Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models deliver superior performance, their increasing complexity and lack of decomposability into individually intuitive components make them hard to interpret. Consequently, when today's intelligent systems fail, they fail spectacularly and disgracefully, giving no warning or explanation.
Towards the goal of making deep networks interpretable, trustworthy, and unbiased, in this thesis I will present my work on building algorithms that provide explanations for decisions emanating from deep networks in order to:
1. understand/interpret why the model did what it did,
2. correct unwanted biases learned by AI models, and
3. encourage human-like reasoning in AI.