Title: Learning over Functions, Distributions and Dynamics via Stochastic Optimization
Bo Dai
School of Computational Science and Engineering
College of Computing
Georgia Institute of Technology
Date: Monday, July 23rd, 2018
Time: 12:00 PM to 2:00 PM EDT
Location: Klaus Advanced Computing Building Room 1315
Committee:
-------------
Dr. Le Song (Advisor), School of Computational Science and Engineering, Georgia Institute of Technology
Dr. Hongyuan Zha, School of Computational Science and Engineering, Georgia Institute of Technology
Dr. Guanghui Lan, H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology
Dr. Byron Boots, School of Interactive Computing, Georgia Institute of Technology
Dr. Arthur Gretton, Gatsby Computational Neuroscience Unit, University College London
Abstract:
-------------
Machine learning has recently witnessed revolutionary success across a wide spectrum of domains. The learning objective, model representation, and learning algorithm are the essential components of a machine learning method. To construct successful methods that naturally fit problems with different targets and inputs, one should consider these three components together in a principled way.
This dissertation aims to develop a unified learning framework for this purpose. At the heart of the framework is optimization with integral operators in infinite-dimensional spaces. This integral-operator view provides an abstract tool for treating the three components jointly across many machine learning tasks, and it leads to efficient algorithms with flexible representations that achieve better approximation ability, scalability, and statistical properties.
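As a rough illustration of the integral-operator view (not part of the original abstract; the symbols below are generic placeholders rather than the dissertation's notation), many of the instantiations that follow can be written as expected-loss minimization over a function represented through an integral operator:

\[
\min_{\rho} \; \mathbb{E}_{(x,y)\sim D}\big[\ell\big(f(x),\, y\big)\big],
\qquad
f(x) = \int \phi(x,\omega)\,\rho(\omega)\,\mathrm{d}\omega = (T\rho)(x),
\]

where \(T\) is the integral operator induced by a feature map \(\phi\), and stochastic approximation proceeds by sampling both data \((x, y)\) and features \(\omega\) to estimate gradients with respect to \(\rho\).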
We investigate several motivating machine learning problems, namely kernel methods, Bayesian inference, invariance learning, and policy evaluation and policy optimization in reinforcement learning, as special cases of the proposed framework with different instantiations of the integral operator. These instantiations yield learning problems whose inputs are functions, distributions, and dynamics. The corresponding algorithms handle the particular integral operators via efficient and provable stochastic approximation that exploits the structural properties of the operators. The proposed framework and the derived algorithms are deeply rooted in functional analysis, stochastic optimization, nonparametric methods, and Monte Carlo approximation, and they contribute to several subfields of machine learning, including kernel methods, Bayesian inference, and reinforcement learning.
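To make the kernel-methods instantiation concrete, here is a minimal, self-contained sketch (again not from the abstract) of stochastic approximation over a random-feature representation: random Fourier features for an RBF kernel combined with stochastic gradient descent on a squared loss. It fixes the sampled features up front, a simplification of fully doubly stochastic schemes that resample features at every step; all function names and hyperparameters are illustrative.

import numpy as np

def random_fourier_features(X, omegas, biases):
    # Approximates an RBF kernel: k(x, x') ~ E_omega[2 cos(w.x + b) cos(w.x' + b)].
    return np.sqrt(2.0 / omegas.shape[0]) * np.cos(X @ omegas.T + biases)

def stochastic_kernel_regression(X, y, n_features=512, n_epochs=20,
                                 lr=0.1, bandwidth=1.0, seed=0):
    # Stochastic gradient descent on a random-feature approximation of an
    # RKHS function f(x) = integral of phi(x, omega) * rho(omega) d omega.
    rng = np.random.default_rng(seed)
    omegas = rng.normal(scale=1.0 / bandwidth, size=(n_features, X.shape[1]))
    biases = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    w = np.zeros(n_features)  # finite-sample stand-in for the coefficients rho(omega)
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            phi = random_fourier_features(X[i:i + 1], omegas, biases)[0]
            resid = phi @ w - y[i]     # gradient of the squared loss at one sample
            w -= lr * resid * phi
    return lambda Xq: random_fourier_features(Xq, omegas, biases) @ w

# Usage: fit a noisy sine curve and report training error.
rng = np.random.default_rng(1)
X = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.normal(size=200)
predict = stochastic_kernel_regression(X, y)
print("train MSE:", float(np.mean((predict(X) - y) ** 2)))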
We believe the proposed framework is a valuable tool for developing machine learning methods in a principled way, and it can potentially be applied to many other scenarios.