Pair of IC Assistant Professors Earn Awards for Research in Visual Question Answering


Contact

David Mitchell

Communications Officer

david.mitchell@cc.gatech.edu

Sidebar Content
No sidebar content submitted.
Summaries

Summary Sentence:

Assistant Professors Dhruv Batra and Devi Parikh earned the Young Investigator award from the Office of Naval Research and a Google Research Faculty Award, respectively.

Full Summary:

No summary paragraph submitted.

Media
  • Dhruv Batra
    (image/jpeg)
  • Devi Parikh
    (image/jpeg)

Two assistant professors in the School of Interactive Computing received awards for their respective research in the field of Visual Question Answering (VQA) last week.

Assistant Professor Dhruv Batra earned a Young Investigator award from the Office of Naval Research, and Assistant Professor Devi Parikh earned a Google Research Faculty Award.

The grants will provide $510,000 over three years for Batra's project, "Explainable and Trustworthy Intelligent Systems," and $85,681 over one year for Parikh's, "Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering."

In VQA, given an image and a free-form natural language question about the image, the machine's task is to automatically produce a concise, accurate, free-form, natural language answer.
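To make the task concrete, here is a minimal illustrative sketch of the VQA input/output contract. It is not from either researcher's work: real VQA systems use deep neural networks over raw pixels, whereas this toy stands in a dict of pre-extracted attributes, and all names below are hypothetical.

```python
# Illustrative only: VQA maps (image, question) -> free-form answer.
# A real system learns this mapping; here a toy "image" is a dict of
# detected attributes (hypothetical representation, not the award research).

def answer_question(image: dict, question: str) -> str:
    """Toy VQA: answer a natural-language question from image attributes."""
    q = question.lower()
    if q.startswith("how many"):
        # Counting questions: look up an object count mentioned in the question.
        for obj, count in image.get("counts", {}).items():
            if obj in q:
                return str(count)
    if q.startswith("what color"):
        # Attribute questions: look up a color for the mentioned object.
        for obj, color in image.get("colors", {}).items():
            if obj in q:
                return color
    return "unknown"

# A toy image described by its detected attributes
image = {"counts": {"dogs": 2}, "colors": {"ball": "red"}}
print(answer_question(image, "How many dogs are in the picture?"))  # 2
print(answer_question(image, "What color is the ball?"))            # red
```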

Batra's research aims to 1) develop theory, algorithms, and implementations for transparent deep neural networks that can explain their predictions, and 2) study how those transparent networks and explanations affect user trust and perceived trustworthiness, using VQA as the AI testbed.

Similarly, Parikh's research aims to build a more balanced VQA dataset that reduces language biases and enables evaluation protocols that more accurately reflect progress in image understanding. Another goal is to train a VQA model that leverages the balanced dataset to promote deeper image understanding, and to develop a counter-example-based explanation modality, in which the VQA model justifies its answer by providing examples of images it believes are similar to the image at hand.

According to the proposal, the result will be that users can better trust the VQA model and anticipate its failures.
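A counter-example-style explanation of this kind amounts to retrieving the images a model considers most similar to the one at hand. The sketch below illustrates the idea with cosine similarity over feature vectors; the feature vectors, gallery, and function names are hypothetical stand-ins, since a real VQA model would use learned deep features.

```python
# Hedged sketch: rank gallery images by cosine similarity to a query image's
# feature vector. All vectors here are made up for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similar_images(query_feat, gallery, k=2):
    """Return the ids of the k gallery images most similar to the query."""
    ranked = sorted(gallery.items(),
                    key=lambda item: cosine(query_feat, item[1]),
                    reverse=True)
    return [img_id for img_id, _ in ranked[:k]]

gallery = {
    "beach_1":   [0.9, 0.1, 0.0],
    "beach_2":   [0.8, 0.2, 0.1],
    "kitchen_1": [0.0, 0.1, 0.9],
}
# The model "justifies" an answer about a beach scene by showing the
# beach images it finds most similar.
print(similar_images([1.0, 0.0, 0.0], gallery, k=2))  # ['beach_1', 'beach_2']
```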

"I think this line of research addresses a fundamental problem for the future of AI -- how do we make AI trustworthy?" Batra said. "How do we build intelligent systems that explain why they are making the predictions they are making?"

Additional Information

Groups

College of Computing, School of Interactive Computing

Categories
No categories were selected.
Related Core Research Areas
Robotics
Newsroom Topics
No newsroom topics were selected.
Keywords
College of Computing, School of Interactive Computing, visual question answering, dhruv batra, devi parikh
Status
  • Created By: David Mitchell
  • Workflow Status: Published
  • Created On: Feb 27, 2017 - 5:03pm
  • Last Updated: Feb 28, 2017 - 2:19pm