Title: Modeling Structure for Visual Understanding and Generation
Date: Thursday, November 29, 2018
Time: 11:30AM - 1:00PM (ET)
Location: TSRB 223
Jianwei Yang
Ph.D. Student in Computer Science
School of Interactive Computing
Georgia Institute of Technology
https://www.cc.gatech.edu/~jyang375/
Committee:
Dr. Devi Parikh (Advisor, School of Interactive Computing, Georgia Institute of Technology)
Dr. Dhruv Batra (School of Interactive Computing, Georgia Institute of Technology)
Dr. David J. Crandall (School of Informatics, Computing, and Engineering, Indiana University)
Dr. Stefan Lee (School of Interactive Computing, Georgia Institute of Technology)
Abstract:
The world around us is highly structured. Objects interact with each other in predictable ways (e.g., mugs are often on tables, keyboards are often below computer monitors, the sky is in the background, grass is often green). This structure manifests itself in the visual data that captures the world around us, and in the text that describes it. The goal of this thesis is to leverage this structure in our visual world for visual understanding and for its dual problem, visual generation, both with and without interactions with language. Specifically, this thesis makes the following contributions.
On the visual understanding side:
1) Proposed an effective approach for scene graph generation that learns to estimate the relatedness between object pairs and prunes the dense graph accordingly before performing graph labeling (see the sketch after this list).
2) Proposed a language-based meta-active-learning framework in which an agent learns to ask informative questions of a human oracle, based on a structured representation of the scene, and then incrementally updates its visual recognition models.
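For concreteness, here is a minimal sketch of the relatedness-scoring-and-pruning step from contribution 1). It is written in PyTorch-style Python; the module name, the MLP scorer, and the shapes are illustrative assumptions of mine, not the thesis's actual implementation.

    import torch
    import torch.nn as nn

    class RelatednessPruner(nn.Module):
        # Scores how likely each ordered object pair is to be related,
        # then keeps only the top-K highest-scoring pairs as edges of a
        # sparse graph to be labeled. Names/shapes are assumptions.
        def __init__(self, feat_dim, hidden_dim=256):
            super().__init__()
            # Small MLP mapping a concatenated (subject, object)
            # feature pair to a scalar relatedness logit.
            self.scorer = nn.Sequential(
                nn.Linear(2 * feat_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, obj_feats, top_k):
            # obj_feats: (N, feat_dim) features for N detected objects.
            n = obj_feats.size(0)
            subj = obj_feats.unsqueeze(1).expand(n, n, -1)  # (N, N, D)
            obj = obj_feats.unsqueeze(0).expand(n, n, -1)   # (N, N, D)
            scores = self.scorer(torch.cat([subj, obj], dim=-1)).squeeze(-1)
            # Exclude self-pairs before pruning the dense graph.
            scores = scores.masked_fill(torch.eye(n, dtype=torch.bool), float("-inf"))
            flat = scores.flatten()
            top_k = min(top_k, n * n - n)
            idx = flat.topk(top_k).indices
            edges = torch.stack([idx // n, idx % n], dim=1)  # (top_k, 2): subject, object
            return edges, flat[idx]

    # Usage: prune a dense graph over 10 objects (90 directed pairs) to 15 edges.
    pruner = RelatednessPruner(feat_dim=128)
    edges, edge_scores = pruner(torch.randn(10, 128), top_k=15)
    print(edges.shape)  # torch.Size([15, 2])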
On the visual generation side:
1) Proposed a new model that exploits the layer-by-layer structure of images, generating the background and foreground step by step and composing them into a single image with the proper spatial configuration (see the compositing sketch after this list).
2) In the proposed work, I will further leverage this layer-by-layer structure in both images and text for visual generation conditioned on language. Specifically, given a description of an image, the model learns to extract the structure of the sentence and then generates the image layer by layer accordingly, so that the generated image is consistent with the given description.
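The layered generation in contribution 1) above ultimately comes down to compositing generated layers. Below is a minimal NumPy sketch of that final step; the constant-color layers and the box-shaped mask are stubs I made up for illustration, standing in for the model's generated background, foreground, and placement mask.

    import numpy as np

    def compose_layers(background, foreground, mask):
        # background, foreground: (H, W, 3) float arrays in [0, 1].
        # mask: (H, W, 1) float array in [0, 1]; 1 where the foreground
        # should appear. Standard alpha compositing of the two layers.
        return mask * foreground + (1.0 - mask) * background

    # Usage with stub 64x64 layers; a real model would generate each
    # layer (and the mask placing the foreground) step by step.
    h = w = 64
    bg = np.full((h, w, 3), 0.3)    # stand-in background layer
    fg = np.full((h, w, 3), 0.8)    # stand-in foreground layer
    mask = np.zeros((h, w, 1))
    mask[16:48, 16:48] = 1.0        # place the foreground in the center
    image = compose_layers(bg, fg, mask)
    print(image.shape)              # (64, 64, 3)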