Title: Learning on Functions via Stochastic Optimization
Bo Dai
School of Computational Science and Engineering
College of Computing
Georgia Institute of Technology
Date: Wednesday, December 20, 2017
Time: 8:30 AM to 10:30 AM EST
Location: KACB 1315
Committee
-------------
Dr. Le Song (Advisor), School of Computational Science and Engineering, Georgia Institute of Technology
Dr. Hongyuan Zha, School of Computational Science and Engineering, Georgia Institute of Technology
Dr. Byron Boots, School of Interactive Computing, Georgia Institute of Technology
Dr. Guanghui Lan, H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology
Dr. Arthur Gretton, Gatsby Computational Neuroscience Unit, University College London
Abstract
-------------
Machine learning has recently witnessed revolutionary success in a wide spectrum of domains. Most of these applications involve learning with complex inputs and/or outputs, which can be graphs, functions, distributions, and even dynamics. The success of these applications often rests on at least two factors: i) the exploitation of structural information in the learning models, and ii) the utilization of huge amounts of data. However, structural information imposes delicate conditions from an optimization point of view, while huge amounts of data demand algorithms that are efficient and scalable. Integrating the two can be very challenging, from both computational and theoretical perspectives.
In this dissertation, we focus mainly on large-scale learning problems over functions, which include kernel methods, where the inputs are functions; Bayesian inference, where the inputs are log-likelihoods; and learning with invariances and reinforcement learning, where the inputs are dynamics. By exploiting these special structures with stochastic optimization in function spaces, we develop principled and practical algorithms for each problem. Moreover, the new perspective sheds light on existing open problems in these areas.
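To give a flavor of stochastic optimization in a function space, the sketch below shows one way such an update can look for kernel regression: at each step a random data point and a random Fourier feature are sampled, and the function estimate gains one new feature direction with a single coefficient. This is a minimal illustrative example, not the dissertation's method; all names, the RBF-bandwidth parameterization, and the step-size schedule are assumptions.

```python
import numpy as np

# Illustrative sketch (not from the dissertation): a "doubly stochastic"
# functional gradient step for least-squares kernel regression with an RBF
# kernel, approximated by random Fourier features sampled on the fly.

rng = np.random.default_rng(0)

d = 5          # input dimension
T = 200        # number of stochastic steps
sigma = 1.0    # assumed RBF bandwidth
eta = 0.5      # assumed base step size

# Synthetic regression data: y = <w_true, x> + noise.
n = 500
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# The function estimate is a growing sum over sampled feature directions:
#   f(x) = sum_t a_t * sqrt(2) * cos(w_t . x + b_t)
ws, bs, coeffs = [], [], []

def f(x):
    """Evaluate the current function estimate at a single point x."""
    val = 0.0
    for w, b, a in zip(ws, bs, coeffs):
        val += a * np.sqrt(2.0) * np.cos(w @ x + b)
    return val

for t in range(1, T + 1):
    i = rng.integers(n)                         # random data point
    w = rng.normal(scale=1.0 / sigma, size=d)   # random feature direction
    b = rng.uniform(0.0, 2.0 * np.pi)
    phi = np.sqrt(2.0) * np.cos(w @ X[i] + b)   # feature value at X[i]
    grad = f(X[i]) - y[i]                       # squared-loss derivative
    ws.append(w)
    bs.append(b)
    # One scalar coefficient per step, with a decaying step size.
    coeffs.append(-(eta / np.sqrt(t)) * grad * phi)

err = np.mean([(f(x) - yi) ** 2 for x, yi in zip(X, y)])
print(f"mean squared training error after {T} steps: {err:.3f}")
```

Note the structural point: the "parameter" being optimized is the function itself, represented only through the random features and data points visited so far, so memory and computation grow with the number of stochastic steps rather than with a fixed feature dimension.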