ML@GT invites you to a seminar by Daniel Russo, an assistant professor at Columbia Business School.
Global Optimality Guarantees for Policy Gradient Methods
Policy gradient methods are perhaps the most widely used class of reinforcement learning algorithms. These methods apply to complex, poorly understood control problems by performing stochastic gradient descent over a parameterized class of policies. Unfortunately, due to the multi-period nature of the objective, policy gradient algorithms face non-convex optimization problems and can get stuck in suboptimal local minima even for extremely simple problems. This talk will discuss structural properties, shared by several canonical control problems, that guarantee the policy gradient objective function has no suboptimal stationary points despite being non-convex. Time permitting, I'll also discuss (1) convergence rates that follow as a consequence of this theory and (2) consequences of this theory for policy gradient methods performed with highly expressive policy classes.
* This talk is based on ongoing joint work with Jalaj Bhandari.
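For readers unfamiliar with the class of methods the abstract describes, the sketch below is a minimal, illustrative REINFORCE-style policy gradient loop on a toy two-state, two-action MDP with a tabular softmax policy. It is not taken from the talk or the speaker's work; the toy dynamics (P, R) and helper names (softmax_policy, rollout) are assumptions chosen only to show the basic idea of stochastic gradient ascent over a parameterized policy class.

# Minimal sketch (not from the talk): REINFORCE-style policy gradient on a
# toy 2-state, 2-action MDP with a tabular softmax policy. All names and
# dynamics here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP: transition probabilities P[s, a, s'] and rewards R[s, a].
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma, horizon = 0.95, 50

def softmax_policy(theta, s):
    """Action probabilities in state s under a tabular softmax policy."""
    z = theta[s] - theta[s].max()
    p = np.exp(z)
    return p / p.sum()

def rollout(theta):
    """Sample one trajectory; return visited (state, action) pairs and the discounted return."""
    s, traj, G, disc = 0, [], 0.0, 1.0
    for _ in range(horizon):
        probs = softmax_policy(theta, s)
        a = rng.choice(2, p=probs)
        traj.append((s, a))
        G += disc * R[s, a]
        disc *= gamma
        s = rng.choice(2, p=P[s, a])
    return traj, G

theta = np.zeros((2, 2))   # one logit per (state, action)
alpha = 0.05               # step size

for it in range(2000):
    traj, G = rollout(theta)
    # Score-function (REINFORCE) gradient estimate: G * sum_t grad log pi(a_t | s_t).
    grad = np.zeros_like(theta)
    for s, a in traj:
        probs = softmax_policy(theta, s)
        grad[s] -= probs        # gradient of log-softmax: indicator minus probabilities
        grad[s, a] += 1.0
    theta += alpha * G * grad   # stochastic gradient ascent on the expected return

print("Learned policy:", [softmax_policy(theta, s).round(2) for s in (0, 1)])

The objective being ascended here is the expected discounted return, which is non-convex in theta; the talk concerns conditions under which such an objective nevertheless has no suboptimal stationary points.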
Russo joined the Decision, Risk, and Operations division of the Columbia Business School as an assistant professor in Summer 2017. Prior to joining Columbia, he spent one great year as an assistant professor in the MEDS department at Northwestern's Kellogg School of Management and one year at Microsoft Research New England as a postdoctoral researcher. Russo received his Ph.D. from Stanford University in 2015, where he was advised by Benjamin Van Roy. In 2011 Russo received his BS in Mathematics and Economics from the University of Michigan.
Russo's research lies at the intersection of statistical machine learning and sequential decision-making, and contributes to the fields of online optimization, reinforcement learning, and sequential design of experiments. He is interested in the design and analysis of algorithms that learn over time to make increasingly effective decisions through interacting with a poorly understood environment.