PhD Proposal by Abhishek Das


Event Details
  • Date/Time:
    • Monday, December 17, 2018
      11:00 am - 12:30 pm
  • Location: CCB 247
Summaries

Summary Sentence: Building Agents That Can See, Talk, and Act


Title: Building Agents That Can See, Talk, and Act

----------------

 

Date: Monday, December 17th, 2018

Time: 11:00am to 12:30pm (ET)

Location: CCB 247

BlueJeans: https://bluejeans.com/6506900315

 

Abhishek Das

Computer Science Ph.D. Student

School of Interactive Computing

Georgia Institute of Technology

abhishekdas.com

 

Committee:

----------------

Dr. Dhruv Batra (Advisor; School of Interactive Computing, Georgia Institute of Technology & Facebook AI Research)

Dr. Devi Parikh (School of Interactive Computing, Georgia Institute of Technology & Facebook AI Research)

Dr. Joelle Pineau (McGill University & Facebook AI Research, Montréal)

Dr. James Hays (School of Interactive Computing, Georgia Institute of Technology & Argo AI)

Dr. Jitendra Malik (University of California, Berkeley & Facebook AI Research)

 

Abstract:

----------------

My research goal is to build intelligent agents (the next generation of Cortana, Alexa, Siri, etc.) that possess the ability to perceive the rich visual environment around us, communicate this understanding in natural language to humans and other agents, and execute actions in a physical environment. Even a small advance towards such agents can fundamentally change our lives – from assistive chatbots for the visually impaired, to natural language interaction with self-driving cars and in-home mobile robots!

 

Towards this grand goal, in this thesis, I will present my work on

1) visual dialog (see+talk) — agents capable of holding free-form conversations about images, together with reinforcement learning-based algorithms that train these visual dialog agents via self-play rather than exhaustively collecting human-annotated datasets,

2) embodied question answering (see+talk+act) — agents with hierarchical and modular navigation architectures that can move around, actively perceive, and answer questions in simulated environments,

3) targeted multi-agent communication (multi-agent see+talk+act) — agents that can communicate with each other in a targeted manner for cooperative tasks, such that they learn both what messages to send and whom to communicate with, solely from downstream task-specific reward and without any communication supervision (a rough sketch of this targeting mechanism follows).
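
To make 3) concrete, here is a minimal sketch of targeted communication via a signature/query/value soft-attention scheme; the module names, dimensions, and PyTorch framing below are illustrative assumptions, not the thesis implementation:

    # A rough sketch: each agent emits a signature (key) and a value; each
    # receiver emits a query. Soft attention over signatures decides whom to
    # listen to, and is trained end-to-end from task reward alone.
    import torch
    import torch.nn as nn

    class TargetedComm(nn.Module):
        def __init__(self, hidden_dim=128, key_dim=16, value_dim=32):
            super().__init__()
            self.signature = nn.Linear(hidden_dim, key_dim)  # what a sender advertises about itself
            self.query = nn.Linear(hidden_dim, key_dim)      # what a receiver is looking for
            self.value = nn.Linear(hidden_dim, value_dim)    # the message content itself
            self.scale = key_dim ** 0.5

        def forward(self, hidden):
            # hidden: (num_agents, hidden_dim) recurrent states of all agents
            k, q, v = self.signature(hidden), self.query(hidden), self.value(hidden)
            attn = torch.softmax(q @ k.t() / self.scale, dim=-1)  # (N, N): who attends to whom
            return attn @ v  # per-agent aggregated incoming message, shape (N, value_dim)

    # Example: aggregated messages for 4 agents with 128-d states
    # incoming = TargetedComm()(torch.randn(4, 128))  # shape (4, 32)

The aggregated message can then be concatenated with each agent's own state and fed to its policy; since nothing in this pathway is supervised directly, the attention weights (i.e., whom to communicate with) are shaped only by the downstream task reward.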

 

In my proposed work, I will develop architectures for multi-agent embodied question answering, where the goal is to answer complex 3D visual reasoning questions (such as "What size is the cylinder that is left of the brown metal thing that is left of the big sphere?") by appropriately combining first-person active perception with navigation actions and inter-agent communication.

----------------

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Dec 11, 2018 - 3:59pm
  • Last Updated: Dec 11, 2018 - 3:59pm