Ph.D. Thesis Proposal Announcement
Title: A Reduction Framework for Extended Formulations and a Faster Algorithm for Online Convex Optimization
Daniel Zink
Ph.D. Student
Algorithms, Combinatorics, and Optimization
H. Milton Stewart School of Industrial & Systems Engineering
Georgia Institute of Technology
Date: Monday, September 12th, 2016
Time: 1pm
Location: ISyE 226A
Committee
----------------
Dr. Sebastian Pokutta (Advisor, School of Industrial & Systems Engineering, Georgia Institute of Technology)
Dr. Santosh S. Vempala (School of Computer Science, Georgia Institute of Technology)
Dr. Santanu S. Dey (School of Industrial & Systems Engineering, Georgia Institute of Technology)
Abstract
-----------------------------------------------------------
Extended formulations have recently received a lot of attention due to a result showing that the TSP polytope does not admit an extended formulation of subexponential size. In the first part I will present a new, encoding-independent view on extended formulations that enables a reduction framework for transferring lower bounds on the size of approximate extended formulations between problems. With this framework we recover inapproximability results previously shown on a case-by-case basis and obtain new lower bounds for many problems, e.g., VertexCover, Max-3-SAT, and bounded-degree IndependentSet. This is joint work with Gábor Braun and Sebastian Pokutta.
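For context, the standard definition of the object in question (not specific to this talk) is the following: an extension of a polytope is a higher-dimensional polyhedron that projects onto it, and the size of the extended formulation is the number of facets of that extension. In LaTeX notation:

    A polyhedron $Q \subseteq \mathbb{R}^{n+k}$ is an extension of a polytope
    $P \subseteq \mathbb{R}^n$ if
    \[
        P = \{ x \in \mathbb{R}^n : \exists\, y \in \mathbb{R}^k \ \text{with}\ (x, y) \in Q \},
    \]
    and the size of the corresponding extended formulation is the number of
    facets (defining inequalities) of $Q$.

A lower bound of the kind mentioned above states that every such Q must have many facets, no matter how the auxiliary variables y are chosen.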
With applications like prediction from expert advice, online shortest path, and portfolio selection, the online optimization model is popular from both a theoretical and an applications perspective. In online convex optimization, online versions of the conditional gradient method (a.k.a. Frank-Wolfe) are widely used because they only require a linear optimization oracle, avoiding the computationally very expensive projection step of Online Gradient Descent. In the second part I will present an algorithm that uses a linear augmentation oracle instead of the linear optimization oracle while maintaining the same asymptotic regret bounds. Augmentation oracles are usually much faster than their optimization counterparts, leading to a significant speedup for this type of online algorithm. This is joint work with Gábor Braun and Sebastian Pokutta.
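To make the projection-free idea concrete, below is a minimal illustrative sketch in Python of an online conditional gradient loop over the probability simplex with linear losses (as in prediction from expert advice). The feasible set, step-size schedule, and loss model here are illustrative assumptions, not the algorithm from the talk; in particular, the talk's algorithm replaces the linear optimization oracle shown below with a linear augmentation oracle, which is not sketched here.

    import numpy as np

    def linear_optimization_oracle(gradient):
        # Minimize <gradient, v> over the probability simplex:
        # the minimizer is the vertex with the smallest gradient entry.
        v = np.zeros_like(gradient)
        v[np.argmin(gradient)] = 1.0
        return v

    def online_conditional_gradient(loss_grads, n, T):
        # loss_grads(t, x) returns the gradient of the round-t loss at x.
        x = np.full(n, 1.0 / n)          # start at the simplex center
        cumulative_grad = np.zeros(n)    # aggregate of past loss gradients
        iterates = []
        for t in range(1, T + 1):
            iterates.append(x.copy())
            cumulative_grad += loss_grads(t, x)
            v = linear_optimization_oracle(cumulative_grad)  # one oracle call per round
            gamma = t ** -0.5                                # decaying step size
            x = (1 - gamma) * x + gamma * v                  # convex step keeps x feasible
        return iterates

    # Example usage with random linear losses over 5 "experts" for 100 rounds.
    rng = np.random.default_rng(0)
    losses = rng.random((100, 5))
    iterates = online_conditional_gradient(lambda t, x: losses[t - 1], n=5, T=100)

The point of the sketch is the structure of each round: one call to a linear oracle followed by a convex-combination step, with no projection anywhere.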
-----------------------------------------------------------