ISyE Seminar - Robert Nowak


Event Details
  • Date/Time:
    • Friday September 10, 2021
      11:00 am - 12:00 pm
  • Location: Virtual
  • Phone:
  • URL: Virtual Link
  • Email:
  • Fee(s):
    N/A
  • Extras:
Contact
No contact information submitted.
Summaries

Summary Sentence: What Kinds of Functions Do Neural Networks Learn?

Full Summary: See the abstract below.

Title: What Kinds of Functions Do Neural Networks Learn?

Abstract: Neural nets have made an amazing comeback during the past decade. Their empirical success has been truly phenomenal, but neural nets are poorly understood in a mathematical sense compared to classical methods like splines, kernels, and wavelets. This talk describes recent steps towards a mathematical theory of neural networks comparable to the foundations we have for classical nonparametric methods. Surprisingly, neural nets are minimax optimal in a wide variety of classical univariate function spaces, including those handled by splines and wavelets. In multivariate settings, neural nets are solutions to data-fitting problems cast in entirely new types of multivariate function spaces characterized through total variation (TV) measured in the Radon transform domain. And deep (multilayer) neural nets naturally represent compositions of functions in these Radon-BV (bounded variation) spaces. Remarkably, this theory provides novel explanations for many notable empirical discoveries in deep learning, including the benefits of “skip connections” and sparse and low-rank “weight” matrices. Radon-BV spaces set the stage for the nonparametric theory of neural nets.
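For readers who want the abstract's key claim in symbols, the display below sketches the Radon-domain total-variation data-fitting problem and the form its solutions take, following the Radon-BV framework the abstract refers to. The notation (Radon transform R, ramp filter Λ^{d-1}, space of finite Radon measures M) and the bound on the number of neurons are assumptions drawn from that literature, not details given in this announcement.

% Hedged sketch (LaTeX), assuming the RBV^2 seminorm of the Radon-BV literature:
% \mathcal{R} = Radon transform, \Lambda^{d-1} = ramp filter, \partial_t^2 = second
% derivative in the Radon offset variable, \mathcal{M} = finite Radon measures.
\[
  \min_{f}\; \sum_{i=1}^{N} \ell\bigl(f(x_i),\, y_i\bigr)
  \;+\; \lambda\, \bigl\|\partial_t^{2}\,\Lambda^{d-1}\,\mathcal{R} f\bigr\|_{\mathcal{M}}
\]
% A representer-theorem-style result says solutions can be taken to be single-hidden-layer
% ReLU networks with a skip (affine) connection, with width K no larger than the data size:
\[
  f(x) \;=\; \sum_{k=1}^{K} v_k\,\bigl(w_k^{\top} x - b_k\bigr)_{+} \;+\; c^{\top} x + c_0 .
\]

One hedged reading of the abstract's remark about skip connections: the affine term c^{\top} x + c_0 is exactly such a connection, and it is not penalized by the Radon-domain TV seminorm above.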

Bio: Robert Nowak holds the Nosbusch Professorship in Engineering at the University of Wisconsin-Madison. His research focuses on signal processing, machine learning, optimization, and statistics.

Additional Information

In Campus Calendar
Yes
Groups

School of Industrial and Systems Engineering (ISyE)

Invited Audience
Faculty/Staff, Postdoc, Public, Graduate students, Undergraduate students
Categories
Seminar/Lecture/Colloquium
Keywords
No keywords were submitted.
Status
  • Created By: sbryantturner3
  • Workflow Status: Published
  • Created On: Aug 30, 2021 - 3:42pm
  • Last Updated: Sep 7, 2021 - 3:11pm