Carnegie Mellon’s Abhinav Gupta presents “Supersizing Self-Supervision: Learning Perception and Action without Human Supervision” as part of the IRIM Robotics Seminar Series. The event will be held in the Engineered Biosystems Building (EBB), CHOA Conference Room, from 12-1 p.m. and is open to the public.
Abstract
In this talk, I will discuss how to learn representations for perception and action without using any manual supervision. First, I am going to discuss how we can learn ConvNets for vision in a completely unsupervised manner using auxiliary tasks. Specifically, I will demonstrate how spatial context in images and viewpoint changes in videos can be used to train visual representations. Next, I will briefly introduce NEIL (Never Ending Image Learner), a computer program that runs 24/7 to automatically build visual detectors and common-sense knowledge from web data. Finally, I will discuss how we can perform end-to-end learning for actions using self-supervision. I will also discuss scaling issues: e.g., will this self-supervised learning scale up to multiple tasks? How can we use multiple robots to scale up learning? I will demonstrate how competition across multiple robots is significantly better than collaboration for tasks such as grasping.
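To make the spatial-context idea concrete, here is a minimal sketch of how such a pretext task can be set up: cut an image into a 3x3 grid of patches and ask a model to predict which of the eight neighboring positions a patch came from relative to the center, so that the supervisory signal comes from the image itself rather than human labels. This is an illustrative toy (the function name, the plain-list "image," and the 2x2 patch size are all assumptions for the sketch), not the speaker's actual implementation.

```python
def make_context_pairs(image, patch=2):
    """Cut a 2-D image (a list of rows) into a 3x3 grid of patches and
    emit (center_patch, neighbor_patch, position_label) training pairs.

    The pretext task: given the center patch and one neighbor, predict
    which of the 8 surrounding positions the neighbor came from.
    The labels are derived from the image layout itself, so no human
    annotation is needed.
    """
    def crop(row, col):
        # Extract the (row, col) patch from the 3x3 grid.
        return [r[col * patch:(col + 1) * patch]
                for r in image[row * patch:(row + 1) * patch]]

    # The 8 neighbor offsets around the center cell, labeled 0..7.
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    center = crop(1, 1)
    return [(center, crop(1 + dr, 1 + dc), label)
            for label, (dr, dc) in enumerate(offsets)]

# A toy 6x6 "image" (3x3 grid of 2x2 patches), pixel value = position.
img = [[r * 6 + c for c in range(6)] for r in range(6)]
pairs = make_context_pairs(img)
print(len(pairs))  # 8 pairs, one per neighbor position
```

In the actual approach, the two patches would be fed to a ConvNet with a small classification head over the eight position labels; the representation learned by the shared trunk is the quantity of interest, and the classifier is discarded after pretraining.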
Bio
Abhinav Gupta is an assistant professor at the Robotics Institute at Carnegie Mellon University. Abhinav’s research focuses on scaling up learning using self-supervision. Specifically, he is interested in how self-supervised systems can effectively use data to learn visual representations, common sense, and representations for actions. To tackle this challenge, he has created a system called Never Ending Image Learner (NEIL). Rated as one of CNN’s top 10 ideas of 2013, NEIL is a computer program that runs 24 hours a day, 7 days a week to extract visual knowledge and common sense on a vast scale. In recent work, he has been extending the NEIL system to physical robots to learn actions in a self-supervised manner. Abhinav is a recipient of the PAMI Young Researcher Award, Sloan Research Fellowship, Bosch Young Faculty Fellowship, YPO Fellowship, ICRA Best Student Paper Award, Google Faculty Research Award, and the ECCV Best Paper Runner-up Award. His research has been featured by Newsweek, The Wall Street Journal, Wired, Slashdot, and the BBC.