PhD Defense by Ramprasaath R. Selvaraju


Event Details
  • Date/Time: Monday, March 23, 2020, 12:00 pm - 2:00 pm (ET)
  • Location: REMOTE
  • URL: BlueJeans Link
  • Fee(s): N/A
Contact
No contact information submitted.
Summaries

Summary Sentence: Explaining Model Decisions and Correcting them via Human Feedback

Full Summary: No summary paragraph submitted.

Title: Explaining Model Decisions and Correcting them via Human Feedback

 

Ramprasaath R. Selvaraju

 

Ph.D. Candidate in Computer Science

School of Interactive Computing

Georgia Institute of Technology

http://ramprs.github.io/

 

Date: Monday, March 23rd, 2020

Time: 12:00-2:00 PM (ET)

BlueJeans: https://bluejeans.com/4346160082

**Note: this defense is remote-only due to the Institute's guidelines on COVID-19**

 

Committee:

 

Dr. Devi Parikh (Advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Dhruv Batra, School of Interactive Computing, Georgia Institute of Technology

Dr. Judy Hoffman, School of Interactive Computing, Georgia Institute of Technology

Dr. Stefan Lee, School of Electrical Engineering and Computer Science, Oregon State University

Dr. Been Kim, Google Brain

 

Abstract:

 

Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models deliver superior performance, their increasing complexity and lack of decomposability into individually intuitive components make them hard to interpret. Consequently, when today's intelligent systems fail, they fail spectacularly and disgracefully, giving no warning or explanation.

 

Towards the goal of making deep networks interpretable, trustworthy, and unbiased, in this thesis I will present my work on building algorithms that provide explanations for decisions emanating from deep networks, in order to:

 

1. understand/interpret why the model did what it did,

2. correct unwanted biases learned by AI models, and

3. encourage human-like reasoning in AI.
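One of the explanation techniques at the heart of this line of work is Grad-CAM (Selvaraju et al., ICCV 2017), which uses the gradients of a target class score with respect to the feature maps of a convolutional layer to produce a coarse localization map of the image regions the model relied on. The following is a minimal sketch of the core Grad-CAM computation in PyTorch; the ResNet-18 backbone, the hooked layer, and the random input tensor are illustrative assumptions for this example, not details from the thesis itself.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Minimal Grad-CAM sketch (Selvaraju et al., ICCV 2017). Backbone, layer
    # choice, and input are illustrative assumptions, not the thesis setup.
    model = models.resnet18(weights="IMAGENET1K_V1").eval()
    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["A"] = out           # feature maps A^k of the last conv stage

    def bwd_hook(module, grad_in, grad_out):
        gradients["A"] = grad_out[0]     # gradients dy^c / dA^k

    model.layer4.register_forward_hook(fwd_hook)
    model.layer4.register_full_backward_hook(bwd_hook)

    x = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed image
    scores = model(x)
    c = scores.argmax(dim=1).item()      # explain the top-scoring class
    scores[0, c].backward()              # backpropagate the class score y^c

    # Channel weights alpha_k: global-average-pool the gradients spatially;
    # the map is ReLU of the alpha-weighted sum of feature maps, upsampled.
    alpha = gradients["A"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((alpha * activations["A"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

In words: each weight alpha_k is the globally average-pooled gradient of the class score with respect to feature map A^k, and the heatmap is the ReLU of their weighted combination, resized to the input resolution so it can be overlaid on the image.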

 

 


Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Mar 20, 2020 - 3:45pm
  • Last Updated: Mar 20, 2020 - 3:45pm