PhD Defense by Michael Cogswell


Event Details
  • Date/Time:
    • Tuesday, March 10, 2020
      12:00 pm - 2:00 pm
  • Location: Coda C1015 Vinings
  • URL: https://bluejeans.com/714664211
Summaries

Summary Sentence: Disentangling Neural Network Representations for Improved Generalization


Title: Disentangling Neural Network Representations for Improved Generalization

Michael Cogswell
Ph.D. Candidate
School of Interactive Computing
Georgia Institute of Technology

 

Date: Tuesday, March 10th, 2020
Time: 12:00 pm to 2:00 pm (ET)
Location: Coda C1015 Vinings

BlueJeans: https://bluejeans.com/714664211

 

Committee:
Prof. Dhruv Batra, School of Interactive Computing, Georgia Institute of Technology
Prof. Devi Parikh, School of Interactive Computing, Georgia Institute of Technology
Prof. James Hays, School of Interactive Computing, Georgia Institute of Technology
Prof. Ashok Goel, School of Interactive Computing, Georgia Institute of Technology
Prof. Stefan Lee, Oregon State University

 

Abstract:

Despite the increasingly broad perceptual capabilities of neural networks, applying them to new tasks requires significant engineering effort in data collection and model design. In part, this is due to the entangled nature of the representations learned by these models. Entangled representations capture spurious patterns that are useful only for specific examples, rather than factors of variation that explain the data more generally. We show that encouraging representations to be disentangled makes them generalize better.

In this thesis we identify three kinds of entangled representations, enforce disentanglement in each case, and show that more general representations result. These perspectives treat disentanglement as statistical independence of features in image classification, language compositionality in goal-driven dialog, and latent intention priors in visual dialog. By increasing the generality of neural networks through disentanglement, we hope to reduce the effort required to apply neural networks to new tasks and to highlight the role of inductive biases like disentanglement in neural network design.
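One of the perspectives above, disentanglement as statistical independence of features, can be illustrated with a toy regularizer that penalizes correlation between feature dimensions across a batch. The function below is an illustrative sketch in NumPy (the name `decorrelation_penalty` and this exact formulation are assumptions for exposition, not code from the thesis):

```python
import numpy as np

def decorrelation_penalty(h):
    """Penalize off-diagonal entries of the batch covariance of features.

    h: array of shape (batch, features), e.g. hidden activations.
    Returns a scalar that is zero when the feature dimensions are
    uncorrelated across the batch, and grows as they become redundant.
    """
    h_centered = h - h.mean(axis=0)           # center each feature
    cov = h_centered.T @ h_centered / h.shape[0]  # (features, features) covariance
    off_diag = cov - np.diag(np.diag(cov))    # zero out the diagonal (variances)
    return 0.5 * np.sum(off_diag ** 2)        # squared Frobenius norm of off-diagonal

# Perfectly correlated columns incur a positive penalty...
redundant = np.array([[1., 1.], [2., 2.], [3., 3.]])
print(decorrelation_penalty(redundant) > 0)   # True

# ...while uncorrelated columns incur none.
independent = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
print(decorrelation_penalty(independent))     # 0.0
```

In training, a term like this would be added to the task loss so the network is pushed toward features that each capture a distinct factor of variation rather than redundant copies of the same spurious pattern.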

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Mar 4, 2020 - 1:43pm
  • Last Updated: Mar 4, 2020 - 1:43pm