PhD Defense by Yiling Luo


Event Details
  • Date/Time: Wednesday, December 21, 2022, 11:00 am - 1:00 pm
  • Location: Groseclose 303
Summaries

Summary Sentence: Stochastic Methods in Model Estimation: New Algorithms and New Properties


Thesis Title: Stochastic Methods in Model Estimation: New Algorithms and New Properties

 

Thesis Committee:

1. Dr. Xiaoming Huo (Advisor, ISyE, Gatech)

2. Dr. Yajun Mei (Co-advisor, ISyE, Gatech)

3. Dr. Arkadi Nemirovski (ISyE, Gatech)

4. Dr. Vladimir Koltchinskii (Mathematics, Gatech)

5. Dr. Kai Zhang (Department of Statistics and Operations Research, University of North Carolina, Chapel Hill)

 

Date and Time: Wednesday, December 21st, 2022, 11:00 am (EST)

 

In-Person Location: Groseclose 303

Meeting Link: Click here to join the meeting

Meeting ID: 295 937 083 365

Passcode: tXvJLq

 

 

Abstract:

We study the properties of stochastic algorithms applied to optimization problems in model estimation. In particular, we investigate the statistical properties of estimators produced by certain stochastic algorithms in Chapters 2-5, and we propose a new stochastic algorithm and study its optimization properties in Chapter 6.

 

We summarize the main content of each chapter as follows.

 

In Chapter 2, we explore a directional bias phenomenon in both stochastic gradient descent (SGD) and gradient descent (GD), and examine its implications for the resulting estimators. We argue that the estimator produced by SGD may enjoy a better generalization error bound.
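
As a minimal illustration of this setting (a sketch with made-up data, not taken from the thesis), the following Python snippet contrasts full-batch GD and mini-batch SGD on a least-squares problem; the two runs can settle on solutions that differ in direction, which is the kind of phenomenon the chapter studies:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad(w, idx):
    # Gradient of the least-squares loss restricted to the rows in `idx`.
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)

def run(batch, steps=2000, lr=0.01):
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        w -= lr * grad(w, idx)
    return w

w_gd = run(batch=n)   # full-batch gradient descent
w_sgd = run(batch=5)  # mini-batch stochastic gradient descent
# The two solutions can differ in direction; the thesis analyzes this
# directional bias and its effect on generalization.
print(np.linalg.norm(w_gd - w_sgd))
```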

 

In Chapter 3, we study the implicit regularization property of a variance-reduced version of the stochastic mirror descent algorithm. Implicit regularization induced by particular algorithms has attracted considerable attention, and its existence for variance-reduction-based stochastic algorithms is new.
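
For readers unfamiliar with the algorithm, here is a schematic sketch of one SVRG-style variance-reduced stochastic mirror descent loop with the entropy mirror map on the probability simplex; the objective and all parameters are illustrative assumptions, not the setup analyzed in the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 20
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(w, i):
    # Gradient of the i-th component f_i(w) = 0.5 * (a_i @ w - b_i)^2.
    return A[i] * (A[i] @ w - b[i])

def full_grad(w):
    return A.T @ (A @ w - b) / n

def md_step(w, g, lr):
    # Entropy mirror map: mirror descent becomes a multiplicative update
    # followed by renormalization onto the probability simplex.
    w = w * np.exp(-lr * g)
    return w / w.sum()

w = np.full(d, 1.0 / d)
lr = 0.05
for epoch in range(20):
    w_ref = w.copy()
    mu = full_grad(w_ref)  # full gradient at the reference point
    for _ in range(n):
        i = rng.integers(n)
        # SVRG-style variance-reduced stochastic gradient estimate.
        g = grad_i(w, i) - grad_i(w_ref, i) + mu
        w = md_step(w, g, lr)
print(w)
```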

 

In Chapter 4, we establish the equivalence between variance-reduced stochastic mirror descent and a technique developed in information theory: variance-reduced stochastic natural gradient descent. The value of establishing this equivalence is that properties proved for either method automatically carry over to the other.
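
The classical version of this correspondence (without variance reduction) can be checked numerically; the snippet below is an illustrative sketch of that classical fact only, not of the chapter's variance-reduced result. With the negative-entropy mirror map, one mirror descent step agrees with one natural gradient step (under the metric given by the Hessian of the mirror map) up to first order in the step size:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
w = rng.uniform(0.5, 1.5, size=d)  # a point in the positive orthant
g = rng.normal(size=d)             # a fixed gradient, for illustration
lr = 1e-3

# Mirror descent with the negative-entropy mirror map psi(w) = sum w log w:
# grad psi(w) = log w + 1, so the update is multiplicative.
w_md = w * np.exp(-lr * g)

# Natural gradient descent with metric Hess psi(w) = diag(1/w):
# w - lr * (Hess psi)^(-1) g = w - lr * w * g (elementwise).
w_ng = w - lr * w * g

# The two updates agree up to O(lr^2), reflecting the
# mirror-descent / natural-gradient correspondence.
print(np.max(np.abs(w_md - w_ng)))
```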

 

In Chapter 5, we study a recent algorithm, ROOT-SGD, for the online learning problem, and we estimate the covariance of the estimator computed via the ROOT-SGD algorithm. Our covariance estimators quantify the uncertainty in the ROOT-SGD algorithm, which is useful for statistical inference.
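
To convey the flavor of such inference, here is a generic plug-in "sandwich" covariance sketch for an M-estimator on least-squares data; it is an illustrative stand-in, not the thesis's ROOT-SGD-specific covariance estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 500, 4
X = rng.normal(size=(n, d))
y = X @ np.ones(d) + rng.normal(size=n)

# Suppose w_hat is the output of a stochastic algorithm; here the exact
# least-squares solution stands in for it.
w_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Plug-in sandwich covariance H^{-1} S H^{-1} / n, where H estimates the
# Hessian of the population loss and S the covariance of the per-sample
# gradients, both evaluated at w_hat.
residuals = X @ w_hat - y
grads = X * residuals[:, None]  # per-sample gradients, shape (n, d)
H = X.T @ X / n
S = grads.T @ grads / n
Hinv = np.linalg.inv(H)
cov = Hinv @ S @ Hinv / n

# Approximate 95% confidence interval for the first coordinate.
se = np.sqrt(cov[0, 0])
print(w_hat[0] - 1.96 * se, w_hat[0] + 1.96 * se)
```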

 

In Chapter 6, we study a constrained strongly convex problem, entropic optimal transport (OT), and we propose a primal-dual stochastic algorithm with variance reduction to solve it. We show that the computational complexity of our algorithm improves on that of other first-order algorithms for solving the entropic OT problem.
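
For context, the classical first-order baseline for entropic OT is the Sinkhorn algorithm, sketched below on synthetic data; the chapter's primal-dual stochastic variance-reduced algorithm is a different method that this sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
mu = np.full(n, 1.0 / n)             # source marginal
nu = np.full(n, 1.0 / n)             # target marginal
C = np.abs(rng.normal(size=(n, n)))  # cost matrix
eps = 0.1                            # entropic regularization strength

# Sinkhorn iterations: alternating dual (scaling) updates for entropic OT,
#   min_P <C, P> - eps * H(P)  s.t.  P @ 1 = mu, P.T @ 1 = nu.
K = np.exp(-C / eps)
u = np.ones(n)
for _ in range(500):
    v = nu / (K.T @ u)
    u = mu / (K @ v)
P = u[:, None] * K * v[None, :]  # approximate optimal transport plan
print((C * P).sum())             # regularized transport cost
```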

 

Additional Information

Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense