Title: Grounded Vision and Language Understanding
Date: Thursday, November 29, 2018
Time: 1:00 PM - 2:15 PM (ET)
Location: TSRB 223
Jiasen Lu
Ph.D. Student in Computer Science
School of Interactive Computing
Georgia Institute of Technology
https://www.cc.gatech.edu/~jlu347/
Committee:
Dr. Devi Parikh (Advisor, School of Interactive Computing, Georgia Institute of Technology)
Dr. Dhruv Batra (School of Interactive Computing, Georgia Institute of Technology)
Dr. Mark Riedl (School of Interactive Computing, Georgia Institute of Technology)
Dr. Jason J. Corso (Department of Electrical Engineering and Computer Science, University of Michigan)
Dr. Richard Socher (Salesforce Research)
Abstract:
The world around us involves multiple modalities. One of the major challenges in modeling different modalities jointly is how to induce appropriate grounding in models given the heterogeneity of the data. Which parts of the image and question should the model focus on when answering a question about an image? When should it rely on visual data vs. just the language model when describing an image? How can we integrate object detectors to produce fluent but visually grounded image captions? How can we disentangle "what to say" from "how to say it" when automatically generating questions about images?
In this thesis, I take steps towards studying how inducing appropriate grounding in deep models improves multi-modal AI capabilities, in the context of vision and language understanding.
Specifically, I will present:
1) how to ground visual question answering models in appropriate regions of the image and appropriate phrases in the question to more accurately answer questions about images
2) how to provide skip connections to an image captioning model so that it can rely on the language model alone for non-visual words in the caption (a minimal illustrative sketch of this gating idea appears at the end of this abstract)
3) how to ground image captioning models in object detections by combining symbolic and deep learning approaches to avoid hallucinations of visual concepts in image captions
In my proposed work, I will study how to disentangle "what to ask" from "how to ask it" when generating a question -- that is, how to ground question generation in the "intention" behind the question -- in the context of a multi-agent image guessing game.
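
The gating idea in point 2 above can be pictured with a short, hypothetical PyTorch sketch. This is illustrative only, not the thesis code: a decoder step attends over image region features, and a learned "visual sentinel" weight lets the model fall back on its purely linguistic state for non-visual words. All names and dimensions, and the assumption that region features and the decoder state share one dimensionality, are mine rather than the author's.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentinelAttention(nn.Module):
    """Illustrative adaptive-attention step: attend over image regions,
    with a sentinel slot that lets the decoder ignore the image."""
    def __init__(self, dim, attn_dim):
        super().__init__()
        self.proj_regions = nn.Linear(dim, attn_dim)   # project image region features
        self.proj_hidden = nn.Linear(dim, attn_dim)    # project decoder state (query)
        self.proj_sentinel = nn.Linear(dim, attn_dim)  # project the visual sentinel
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, regions, hidden, sentinel):
        # regions: (B, R, dim); hidden, sentinel: (B, dim)
        keys = torch.cat([self.proj_regions(regions),
                          self.proj_sentinel(sentinel).unsqueeze(1)], dim=1)  # (B, R+1, A)
        query = self.proj_hidden(hidden).unsqueeze(1)                          # (B, 1, A)
        logits = self.score(torch.tanh(keys + query)).squeeze(-1)              # (B, R+1)
        alpha = F.softmax(logits, dim=-1)
        visual = (alpha[:, :-1].unsqueeze(-1) * regions).sum(dim=1)  # attended image feature
        gate = alpha[:, -1:]                                         # weight placed on the sentinel
        # For visual words the gate stays near 0 (use image evidence);
        # for non-visual words it moves toward 1 (rely on the language model).
        context = (1.0 - gate) * visual + gate * sentinel
        return context, gate

In a sketch like this, the gate value at each decoding step is itself an interpretable grounding signal: it can be inspected to check whether a generated word was driven by the image or by the language prior.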