The Institute for Robotics and Intelligent Machines and the Machine Learning Center present "Deep Learning to Learn" by Pieter Abbeel of UC Berkeley. The event will be held in the auditorium of the Callaway GTMI Building from 12:15 to 1:15 p.m. and is open to the public.
Abstract
Reinforcement learning and imitation learning have seen success in many domains, including autonomous helicopter flight, Atari, simulated locomotion, Go, and robotic manipulation. However, the sample complexity of these methods remains very high. In contrast, humans can pick up new skills far more quickly. To do so, humans might rely on a better learning algorithm or on a better prior (potentially learned from past experience), and likely on both. In this talk I will describe some recent work on meta-learning for action, where agents learn the imitation/reinforcement learning algorithms and learn the prior. This has enabled acquiring new skills from just a single demonstration or just a few trials. While designed for imitation and RL, our work is more generally applicable and has also advanced the state of the art in standard few-shot classification benchmarks such as Omniglot and mini-ImageNet.
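To make the "learning to learn" idea in the abstract concrete, here is a minimal sketch of a MAML-style meta-learning loop (Finn, Abbeel & Levine, 2017), which is one formulation in this line of work. It is an illustration only, not the systems presented in the talk: the linear model, the toy tasks, and helper names such as `inner_update` and `meta_step` are assumptions for the example. The outer loop adjusts the initial parameters (the "prior") so that a single inner gradient step adapts well to a new task.

```python
# Minimal MAML-style meta-learning sketch in JAX (illustrative only).
import jax
import jax.numpy as jnp

def predict(params, x):
    # Simple linear model: y = w * x + b
    w, b = params
    return w * x + b

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def inner_update(params, x, y, inner_lr=0.1):
    # One gradient step of task-specific adaptation ("fast learning").
    grads = jax.grad(loss)(params, x, y)
    return tuple(p - inner_lr * g for p, g in zip(params, grads))

def meta_loss(params, task):
    # Adapt on the task's support set, then evaluate on its query set.
    (x_s, y_s), (x_q, y_q) = task
    adapted = inner_update(params, x_s, y_s)
    return loss(adapted, x_q, y_q)

@jax.jit
def meta_step(params, task, meta_lr=0.01):
    # Outer update: move the shared initialization (the learned prior)
    # so that one inner step works well on new tasks.
    grads = jax.grad(meta_loss)(params, task)
    return tuple(p - meta_lr * g for p, g in zip(params, grads))

# Toy usage: each task is fitting a line y = a * x with a random slope a.
key = jax.random.PRNGKey(0)
params = (jnp.array(0.0), jnp.array(0.0))
for step in range(1000):
    key, k1, k2 = jax.random.split(key, 3)
    a = jax.random.uniform(k1, minval=-2.0, maxval=2.0)          # task parameter
    x = jax.random.uniform(k2, (10,), minval=-1.0, maxval=1.0)
    y = a * x
    task = ((x[:5], y[:5]), (x[5:], y[5:]))                       # support / query split
    params = meta_step(params, task)
```

After meta-training, the shared initialization can be adapted to an unseen slope with a single call to `inner_update` on a handful of examples, which is the few-shot behavior the abstract refers to.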
Bio
Pieter Abbeel is a professor and director of the Robot Learning Lab at UC Berkeley (2008- ), co-founder of covariant.ai (2017- ), co-founder of Gradescope (2014- ), advisor to OpenAI, founding faculty partner of AI@TheHouse, and an advisor to many AI/robotics start-ups.
Abbeel works in machine learning and robotics. In particular, his research focuses on how to make robots learn from people (apprenticeship learning), how to make robots learn through their own trial and error (reinforcement learning), and how to speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot tying, basic assembly, laundry folding, locomotion, and vision-based robotic manipulation.
Abbeel has won numerous awards, including best paper awards at ICML, NIPS, and ICRA; early career awards from the NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE; and the Presidential Early Career Award for Scientists and Engineers (PECASE). His work is frequently featured in the popular press, including The New York Times, BBC, Bloomberg, The Wall Street Journal, Wired, Forbes, Tech Review, and NPR.