Name: Bo Dai, Research Scientist at Google Brain
Date: Thursday, June 2, 2022 at 11:00 am
Link: https://gatech.zoom.us/s/98554282532 Code: 690847
Title: Push Reinforcement Learning towards Practical via Representation Learning
Abstract: Discovering relevant transformations of complex data, often referred to as representation learning, has achieved remarkable success, particularly in areas such as computer vision and natural language processing. However, the power of representation learning has not yet been fully exploited in reinforcement learning (RL). In this talk, I will present our recent work on representation learning in RL. We designed practical algorithms for extracting useful representations, with the goal of improving sample efficiency and empirical performance on different RL tasks. Specifically, we first warm up by illustrating the power of representation in automatic dialog evaluation via off-policy evaluation methods. Second, we investigate representation learning for control tasks to achieve a delicate tradeoff between exploration and exploitation. Finally, we consider representation learning for transferring knowledge across different planning tasks. These successes demonstrate the importance of representation learning in RL, which is key to making RL both theoretically sound and practically applicable.
Bio: Bo Dai is a staff research scientist at Google Brain. He obtained his Ph.D. from Georgia Tech. His research interest lies in developing principled and practical machine learning methods for data-driven decision making, especially their applications in reinforcement learning, natural language processing, and operations research. He is a recipient of best paper awards at AISTATS and a NeurIPS workshop. He regularly serves as an area chair or senior program committee member at major AI/ML conferences such as ICML, NeurIPS, AISTATS, and ICLR.