PhD Proposal by Ramprasaath R. Selvaraju


Event Details
  • Date/Time:
    • Tuesday, December 11, 2018
      4:30 pm - 6:30 pm
  • Location: CCB 312A

Title: Towards Interpretable, Transparent and Unbiased AI

 

Date: Tuesday, December 11, 2018

Time: 4:30PM - 6:30PM (ET)

Location: CCB 312A

 

Ramprasaath R. Selvaraju

Ph.D. Student in Computer Science

School of Interactive Computing 

Georgia Institute of Technology

ramprs.github.io

 

Committee:

Dr. Devi Parikh (Advisor, School of Interactive Computing, Georgia Institute of Technology)

Dr. Dhruv Batra (School of Interactive Computing, Georgia Institute of Technology)

Dr. Stefan Lee (School of Interactive Computing, Georgia Institute of Technology)

Dr. Been Kim (Sr. Research Scientist, Google Brain)

 

Abstract:

Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models deliver superior performance, their increasing complexity and lack of decomposability into individually intuitive components make them hard to interpret. Consequently, when today's intelligent systems fail, they often fail spectacularly and disgracefully, giving no warning or explanation.

 

Towards the goal of making deep networks Interpretable, Transparent and Unbiased, in my thesis I will present my work on building algorithms that provide explanations for decisions emanating from deep networks, in order to:

1. understand why the model did what it did,

2. diagnose network errors,

3. help users build appropriate trust, and

4. enable knowledge transfer between humans and AI. 

 

In my proposed work, I will show how we can leverage explanations to teach AI systems to correct unwanted biases learned during training, thus improving visual grounding in these systems and making them more trustworthy.
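As background for the talk: the speaker's best-known explanation method is Grad-CAM (Selvaraju et al., ICCV 2017), which weights a convolutional layer's activation maps by the spatially pooled gradients of a class score and keeps only the positive evidence. A minimal NumPy sketch of that core computation follows; the synthetic arrays and function name are illustrative, not taken from any published code.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM localization map.

    activations: (K, H, W) feature maps A^k from a chosen conv layer.
    gradients:   (K, H, W) gradients dy^c/dA^k of the class score y^c.
    Returns an (H, W) heatmap: ReLU of the alpha-weighted sum of maps.
    """
    # alpha_k: global-average-pool each gradient map over spatial dims
    alphas = gradients.mean(axis=(1, 2))              # shape (K,)
    # weighted combination of the forward activation maps
    cam = np.tensordot(alphas, activations, axes=1)   # shape (H, W)
    # ReLU keeps only features with positive influence on the class
    return np.maximum(cam, 0.0)

# toy example: 3 feature maps of size 4x4
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 4))
dYdA = rng.standard_normal((3, 4, 4))
heatmap = grad_cam(A, dYdA)
print(heatmap.shape)  # (4, 4)
```

In practice the activations and gradients would come from a real network (e.g., via framework hooks on a convolutional layer), and the heatmap would be upsampled to the input resolution for visualization.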

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal