ML@GT Seminar: Mike Rabbat

Event Details
  • Date/Time:
    • Wednesday, April 4, 2018
      12:00 pm - 1:30 pm
  • Location: Atlanta, GA
  • Fee(s): N/A
Contact

Kyla Hanson: khanson@cc.gatech.edu

Summaries

Summary Sentence: Please join us for a talk by Dr. Mike Rabbat

Full Summary: No summary paragraph submitted.

Media
  • Machine Learning (image/jpeg)

Bio: Mike Rabbat is a Research Scientist in the Facebook AI Research group. He is currently on leave from McGill University, where he is an Associate Professor of Electrical and Computer Engineering. Mike received a master's degree from Rice University in 2003 and a PhD from the University of Wisconsin in 2006, both under the supervision of Robert Nowak. Mike's research interests are in the areas of networks, statistical signal processing, and machine learning. Currently, he is working on gossip algorithms for distributed processing, distributed tracking, and algorithms and theory for signal processing on graphs.


Title: Asynchronous Subgradient-Push


Abstract: We consider a multi-agent framework for distributed optimization in which each agent in the network has access to a local convex function, and the collective goal is to reach consensus on the parameters that minimize the sum of the agents' local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents in the network. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that a subsequence of the iterates at each agent converges to a neighbourhood of the global minimum, where the size of the neighbourhood depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Subgradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size. This is joint work with Mahmoud Assran.
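
For readers unfamiliar with the method, the following is a minimal, synchronous sketch of the subgradient-push iteration (Nedic and Olshevsky, 2015) that the asynchronous algorithm in this talk builds on; it is not the speaker's implementation. The directed-ring topology, the quadratic local objectives f_i(x) = (x - a_i)^2, and the 1/t step size are illustrative assumptions.

    import numpy as np

    # Synchronous subgradient-push on a directed ring (illustrative sketch).
    # Each agent i holds f_i(x) = (x - a_i)^2; the global minimizer is mean(a).
    n = 8
    rng = np.random.default_rng(0)
    a = rng.normal(size=n)          # hypothetical local data

    out_deg = 2                     # each agent sends to itself and its successor
    x = np.zeros(n)                 # push-sum numerators
    y = np.ones(n)                  # push-sum weights

    for t in range(1, 2001):
        alpha = 1.0 / t             # diminishing step size
        # Push: split (x_i, y_i) equally among out-neighbours (self + successor),
        # so each agent receives shares from itself and its predecessor.
        xs, ys = x / out_deg, y / out_deg
        w = xs + np.roll(xs, 1)
        y = ys + np.roll(ys, 1)
        z = w / y                   # de-biased local estimate
        x = w - alpha * 2.0 * (z - a)   # subgradient step on the local objective

    print("agent estimates:", np.round(z, 4))
    print("true minimizer :", round(a.mean(), 4))

Each agent keeps a push-sum weight y_i alongside its numerator x_i; the ratio z_i = x_i / y_i corrects the bias that column-stochastic communication over a directed graph would otherwise introduce. The talk's asynchronous variant lets agents perform these updates at independent rates.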

Additional Information

In Campus Calendar
No
Groups

ML@GT

Invited Audience
Faculty/Staff, Public, Undergraduate students
Categories
No categories were selected.
Keywords
No keywords were submitted.
Status
  • Created By: Kyla J. Reese
  • Workflow Status: Published
  • Created On: Apr 4, 2018 - 10:16am
  • Last Updated: Apr 4, 2018 - 10:17am