Title: Efficient and Principled Robot Learning: Theory and Algorithms
Ching-An Cheng
Robotics PhD Candidate
School of Interactive Computing
Georgia Institute of Technology
Date: Thursday, Dec 5, 2019
Time: 3:30pm - 5:30pm (EST)
Location: Coda C1015 Vinings
Committee:
Dr. Byron Boots (advisor), School of Interactive Computing, Georgia Institute of Technology
Dr. Seth Hutchinson, School of Interactive Computing, Georgia Institute of Technology
Dr. Karen Liu, Department of Computer Science, School of Engineering, Stanford University
Dr. Evangelos Theodorou, School of Aerospace Engineering, Georgia Institute of Technology
Dr. Geoff Gordon, Microsoft Research and Machine Learning Department, Carnegie Mellon University
Abstract:
Roboticists have long envisioned fully automated robots that can operate reliably in unstructured environments. This is an exciting but extremely difficult problem: to succeed, robots must reason about sequential decisions and their consequences in the face of uncertainty. As a result, the engineering effort required to build reliable robotic systems in practice is both demanding and expensive. This research aims to provide a set of techniques for efficient and principled robot learning. We approach this challenge from a theoretical perspective that more closely integrates analysis with practical needs, and we apply these theoretical principles to design better algorithms for two important aspects of robot learning: policy optimization and the development of structural policies. This research uses and extends online learning, optimization, and control theory, and is demonstrated in applications including reinforcement learning, imitation learning, and structural policy fusion. A shared feature across this research is the reciprocal interaction between the development of practical algorithms and the advancement of abstract analyses: real-world challenges force a rethinking of the proper theoretical formulations, which in turn leads to refined analyses and new algorithms that can rigorously leverage these insights to achieve better performance.
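
To give a flavor of the online-learning view of policy optimization mentioned above, the sketch below shows a toy round-based learner: at each round it rolls out its current policy, receives a per-round imitation loss against a hypothetical expert, and takes an online gradient step. This is only an illustrative sketch under assumed toy dynamics and a linear policy class; the names, dimensions, and the expert model are invented for the example and are not the algorithms presented in the talk.

# Minimal sketch: policy optimization treated as online learning (illustrative,
# not the author's method). A linear policy is updated by online gradient
# descent on a per-round imitation loss measured on states the learner visits.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 4, 2
theta = np.zeros((ACTION_DIM, STATE_DIM))                # linear policy: a = theta @ s
expert_gain = rng.normal(size=(ACTION_DIM, STATE_DIM))   # hypothetical expert policy

def rollout(policy_theta, horizon=20):
    """Collect states by running the current policy in a toy linear system."""
    s = rng.normal(size=STATE_DIM)
    states = []
    for _ in range(horizon):
        states.append(s)
        a = policy_theta @ s
        s = 0.9 * s + 0.1 * np.concatenate([a, np.zeros(STATE_DIM - ACTION_DIM)])
        s += 0.01 * rng.normal(size=STATE_DIM)            # process noise
    return np.array(states)

step_size = 0.5
for round_idx in range(100):
    states = rollout(theta)                               # states induced by the learner
    expert_actions = states @ expert_gain.T               # expert labels on those states
    learner_actions = states @ theta.T
    # Gradient of the per-round squared imitation loss w.r.t. theta.
    grad = 2.0 * (learner_actions - expert_actions).T @ states / len(states)
    theta -= step_size / np.sqrt(round_idx + 1) * grad    # online gradient step

print("final imitation gap:", np.linalg.norm(theta - expert_gain))

The key design choice the sketch illustrates is that the loss at each round is evaluated on the state distribution induced by the learner's own policy, which is what makes regret-style online-learning analyses applicable to interactive settings such as imitation learning.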