The Artificial Intelligence Institute for Advances in Optimization (AI4OPT) is an NSF-funded AI institute run jointly by Georgia Tech and several other institutions. Starting this Fall, the institute is kicking off a new seminar series, broadly on AI and optimization. The weekly seminar announcements will be sent to the new ai4opt-seminars mailing list. To receive these announcements, please subscribe here: https://lists.isye.gatech.edu/mailman/listinfo/ai4opt-seminars
We have the following lineup of speakers for the Fall semester, with a few more to be added.
AI4OPT Seminar Series
Thursday, September 29, 2022, Noon – 1:00 pm
Location: Virtual Meeting
Meeting Link: https://gatech.zoom.us/j/99381428980
Speaker: Satyen Kale
Optimization Algorithms for Heterogeneous Clients in Federated Learning
Abstract: Federated Learning has emerged as an important paradigm in modern large-scale machine learning, where the training data remains distributed over a vast number of clients, which may be phones, network sensors, hospitals, etc. A major challenge in the design of optimization methods for Federated Learning is the heterogeneity (i.e., non-IID nature) of client data. This problem affects the currently dominant algorithm deployed in practice, Federated Averaging (FedAvg): we provide results for FedAvg quantifying the degree to which this problem causes unstable or slow convergence. We develop two optimization algorithms to handle this problem in two different settings of Federated Learning. In the cross-silo setting, we propose a new algorithm called SCAFFOLD, which uses control variates to correct for client heterogeneity, and prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. In the cross-device setting, we propose a general framework called Mime, which mitigates client drift and adapts arbitrary centralized optimization algorithms (e.g., SGD and Adam) to Federated Learning via a combination of control variates and server-level statistics (e.g., momentum) at every client-update step. Our theoretical and empirical analyses establish the superiority of SCAFFOLD and Mime over other baselines in their respective settings.
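For readers curious about the control-variate mechanism the abstract describes, here is a minimal NumPy sketch of a SCAFFOLD-style client update and server round. The function and variable names are hypothetical; it follows the published "option II" control-variate update under full client participation, and is an illustration of the idea rather than the speaker's implementation.

import numpy as np

def scaffold_client_update(x, c, c_i, grad_fn, lr=0.1, local_steps=10):
    # One client's local steps with SCAFFOLD's control-variate correction.
    #   x      : current server model (np.ndarray)
    #   c, c_i : server and client control variates (same shape as x)
    #   grad_fn: stochastic gradient oracle for this client's local loss
    y = x.copy()
    for _ in range(local_steps):
        # The (c - c_i) term counteracts client drift from non-IID data.
        y = y - lr * (grad_fn(y) - c_i + c)
    # "Option II" control-variate update from the SCAFFOLD paper.
    c_i_new = c_i - c + (x - y) / (local_steps * lr)
    return y - x, c_i_new - c_i  # model delta, control-variate delta

def scaffold_round(x, c, client_grad_fns, client_cs, server_lr=1.0):
    # One communication round, assuming every client participates.
    deltas_y, deltas_c = [], []
    for i, grad_fn in enumerate(client_grad_fns):
        dy, dc = scaffold_client_update(x, c, client_cs[i], grad_fn)
        client_cs[i] = client_cs[i] + dc
        deltas_y.append(dy)
        deltas_c.append(dc)
    x_new = x + server_lr * np.mean(deltas_y, axis=0)
    c_new = c + np.mean(deltas_c, axis=0)  # with client sampling, scale by |S|/N
    return x_new, c_new

With the correction term removed (c = c_i = 0), the same loop reduces to plain local SGD plus averaging, i.e., FedAvg, whose instability under heterogeneous client data the talk quantifies.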
Bio: Satyen Kale is a research scientist at Google Research, working in the New York office. His current research focuses on the design of efficient and practical algorithms for fundamental problems in machine learning and optimization. More specifically, he is interested in decision-making under uncertainty, statistical learning theory, combinatorial optimization, and continuous optimization. His research has been recognized with several awards: a Best Paper Award at ICML 2015, a Best Paper Award at ICLR 2018, and a Best Student Paper Award at COLT 2018. He was a program chair for COLT 2017 and ALT 2019.