PhD Defense by Yash Goyal

Event Details
  • Date/Time:
    • Monday, March 9, 2020
      11:00 am - 1:00 pm (ET)
  • Location: Coda 1215 Midtown
Summaries

Summary Sentence: Towards Transparent and Grounded Visual AI Systems

Title: Towards Transparent and Grounded Visual AI Systems

----------------

Yash Goyal
Ph.D. Candidate in Computer Science
School of Interactive Computing
Georgia Institute of Technology
https://www.cc.gatech.edu/~ygoyal3/

Date: Monday, March 9th, 2020
Time: 11:00 am to 1:00 pm (ET)
Location: Coda 1215 Midtown

Committee:
----------------
Dr. Dhruv Batra (Advisor; School of Interactive Computing, Georgia Institute of Technology & Facebook AI Research)
Dr. Devi Parikh (School of Interactive Computing, Georgia Institute of Technology & Facebook AI Research)
Dr. Mark Riedl (School of Interactive Computing, Georgia Institute of Technology)
Dr. Trevor Darrell (University of California, Berkeley)
Dr. Stefan Lee (Oregon State University)

Abstract:
----------------

My research goal is to build transparent and grounded AI systems. In my thesis, I try to answer the question "Do deep visual models make their decisions for the right reasons?" in two ways:

1. Visual grounding. Grounding is essential for building reliable and generalizable systems that are not driven by dataset biases. In the context of Visual Question Answering (VQA), we expect models to look at the right regions in the image while answering a question. We address this issue of visual grounding in VQA by proposing (a) two new benchmark datasets to test visual grounding, and (b) a new VQA model that is visually grounded by design.
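
As a rough illustration of what "looking at the right regions" can mean operationally, the sketch below compares a VQA model's spatial attention map against a human-annotated importance map over the same grid using rank correlation. This is only a hypothetical evaluation sketch; the names (grounding_score, model_attention, human_attention) and the 14x14 grid are illustrative assumptions, not the thesis's actual code or benchmark.

# Hypothetical sketch (not the thesis's evaluation code): compare a VQA model's
# spatial attention map with a human-annotated importance map over the same
# 14x14 grid, using rank correlation as a crude grounding score.
import numpy as np
from scipy.stats import spearmanr

def grounding_score(model_attention, human_attention):
    # Higher rank correlation = the model attends to regions humans deem relevant.
    corr, _ = spearmanr(model_attention.ravel(), human_attention.ravel())
    return float(corr)

# Toy example with random maps standing in for real attention / annotations.
rng = np.random.default_rng(0)
print(grounding_score(rng.random((14, 14)), rng.random((14, 14))))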

2. Transparency. Transparency in AI systems can help system designers identify failure modes and can provide guidance for teaching humans. We developed techniques for generating explanations from deep models that give us insight into what these models base their decisions on. Specifically, we generate counterfactual visual explanations and show how such explanations can be used to teach humans.
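
To make the notion of a counterfactual visual explanation concrete, the toy sketch below searches for the single spatial feature cell in a query image that, when replaced by a cell from a distractor image, most increases a classifier's probability of the distractor class. It is a heavily simplified, self-contained illustration (random features, a linear softmax head, exhaustive single-cell search), not the algorithm or code from the thesis.

# Toy illustration of the counterfactual-explanation idea: which region of the
# query image would have to look like a region of the distractor image for the
# model to predict the distractor class?
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def class_prob(feats, W, cls):
    # Average-pool H x W x D features, apply a linear head W (D x C), take softmax.
    pooled = feats.reshape(-1, feats.shape[-1]).mean(axis=0)
    return softmax(pooled @ W)[cls]

def best_cell_swap(feats_q, feats_d, W, target_cls):
    # Exhaustive search over single-cell swaps from the distractor into the
    # query feature map, keeping the swap that maximizes P(target_cls).
    H, Wg, D = feats_q.shape
    flat_q, flat_d = feats_q.reshape(-1, D), feats_d.reshape(-1, D)
    best = (None, None, class_prob(feats_q, W, target_cls))
    for i in range(H * Wg):
        for j in range(H * Wg):
            swapped = flat_q.copy()
            swapped[i] = flat_d[j]
            p = class_prob(swapped.reshape(H, Wg, D), W, target_cls)
            if p > best[2]:
                best = (i, j, p)
    return best  # (query cell to edit, distractor cell to copy, new probability)

# Toy example: random 7x7x32 feature maps and a random 10-class linear head.
rng = np.random.default_rng(0)
feats_q, feats_d = rng.random((7, 7, 32)), rng.random((7, 7, 32))
W = rng.standard_normal((32, 10))
print(best_cell_swap(feats_q, feats_d, W, target_cls=3))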

Most interpretability work relies on correlations to generate explanations, which may or may not reflect the true underlying mechanism in the deep model. This can have disastrous consequences in high-risk domains such as healthcare and ethics. To prevent this, we need to reason about the causal relationship between explanations and model decisions. Towards the end of the talk, I will present my recent work on generating causal explanations.
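
One simple way to frame the causal question is as an interventional effect: how much does the classifier's output change, on average, when a concept is switched on versus off by intervention (for example, on matched image pairs rendered with and without the concept), with everything else held fixed? The snippet below is only a hypothetical illustration of that quantity; the function name, the pairing setup, and the toy numbers are assumptions, not the method presented in the talk.

# Hypothetical sketch: average interventional effect of a concept on a classifier.
import numpy as np

def average_causal_effect(scores_with_concept, scores_without_concept):
    # Mean difference in classifier scores across matched do(concept=1) /
    # do(concept=0) pairs; a nonzero value suggests the concept causally
    # influences the model's decision.
    a = np.asarray(scores_with_concept, dtype=float)
    b = np.asarray(scores_without_concept, dtype=float)
    return float((a - b).mean())

# Toy paired scores standing in for classifier outputs on intervened images.
print(average_causal_effect([0.90, 0.80, 0.95], [0.40, 0.35, 0.50]))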

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense