Abstract:
The talk will dive slightly deeper into recent topics in reinforcement learning, which for many at Georgia Tech is a favorite tool for constructing policies for control solutions in robotics and elsewhere. From the beginning, control foundations have lurked behind the RL curtain: Watkins’ Q-function looks suspiciously like the Hamiltonian in Pontryagin’s minimum principle, and (since Van Roy’s thesis) it has been known that our beloved adjoint operators are the key to understanding what is going on with TD-learning. This talk will briefly survey the goals and foundations of RL, and present new work showing how to dramatically accelerate convergence.
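For readers less familiar with the Q-function the abstract refers to, the sketch below shows Watkins' tabular Q-learning on a tiny, made-up two-state MDP. The environment, step sizes, and episode count are illustrative assumptions only; none of this is drawn from the talk itself.

```python
import random

# Illustrative toy MDP (an assumption, not from the talk):
# two states, two actions; action 1 always leads to state 1,
# which pays reward 1; everything else pays 0.
N_STATES, N_ACTIONS = 2, 2
GAMMA = 0.9   # discount factor
ALPHA = 0.1   # step size
EPS = 0.1     # exploration probability

def step(s, a):
    s_next = 1 if a == 1 else 0
    r = 1.0 if s_next == 1 else 0.0
    return s_next, r

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
random.seed(0)

s = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.randrange(N_ACTIONS)
    else:
        a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
    s_next, r = step(s, a)
    # Watkins' update: move Q(s,a) toward the TD target
    td_target = r + GAMMA * max(Q[s_next])
    Q[s][a] += ALPHA * (td_target - Q[s][a])
    s = s_next

# Greedy policy recovered from the learned Q-function
policy = [max(range(N_ACTIONS), key=lambda x: Q[si][x]) for si in range(N_STATES)]
print(policy)  # expected greedy policy: [1, 1]
```

The TD target `r + GAMMA * max(Q[s_next])` is the quantity whose fixed point is the optimal Q-function, and the slow, step-size-limited averaging toward that target is exactly the kind of convergence behavior the talk proposes to accelerate.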
Bio:
Sean Meyn was raised by the beach in California. Following his BA in mathematics at UCLA, he moved on to pursue a PhD with Peter Caines at McGill University. After about 20 years as a professor of ECE at the University of Illinois, in 2012 he moved to beautiful Gainesville. He is now Professor and Robert C. Pittman Eminent Scholar Chair in the Department of Electrical and Computer Engineering at the University of Florida, and director of the Laboratory for Cognition and Control. He currently holds the Inria International Chair, Paris, France. His interests span many aspects of stochastic control, stochastic processes, information theory, and optimization. For the past decade, his applied research has focused on engineering, markets, and policy in energy systems.