Let x denote a vector of n decision variables and Y denote a random vector. A constraint of the form

    P(f(x,Y) > 0) < eps

is called a chance constraint on the decision variable x. Optimization problems with chance constraints are notoriously hard to solve.
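To fix ideas, here is a minimal sketch of checking by Monte Carlo whether a given x satisfies a chance constraint; the linear f(x,Y) = Y^T x - b and the Gaussian distribution are illustrative assumptions, not details from the talk.

```python
# Monte Carlo check of the chance constraint P(f(x,Y) > 0) < eps for a
# fixed candidate x.  The linear f(x,Y) = Y^T x - b and the Gaussian Y
# are illustrative assumptions, not details from the talk.
import numpy as np

rng = np.random.default_rng(0)
n, b, eps = 3, 10.0, 0.05
x = np.array([2.0, 2.0, 2.0])          # a candidate decision vector

Y = rng.normal(loc=1.0, scale=1.0, size=(100_000, n))   # samples of Y
violation_prob = np.mean(Y @ x - b > 0.0)   # estimates P(f(x,Y) > 0)
print(f"estimated P(f(x,Y) > 0) = {violation_prob:.4f} (target eps = {eps})")
```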
A simple strategy for solving chance-constrained problems is to generate N samples Y_1, ..., Y_N according to the distribution P and then impose the constraints

    f(x, Y_i) <= 0   for all i = 1, ..., N.        (*)

Since the Y_i are random samples, there is no hope that every decision vector x feasible for (*) is also feasible for the chance constraint with probability 1. Therefore, one has to allow a probability of error delta. Question: how large should N be as a function of eps and delta?
Recently, Calafiore and Campi showed that when f(x,Y) is a convex function of x for every fixed Y, it suffices to take N = O((n/eps) log(1/delta)). Nemirovski and Shapiro have shown that if f(x,Y) is bi-affine and the distribution P has a "concentration-of-measure" property, the number of samples required drops to N = O(n log(1/(eps*delta))).
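To make the sampled approximation (*) concrete, here is a minimal sketch for a chance-constrained LP. The linear f, the Gaussian distribution, the box bounds on x, and the constant in the sample-size formula are all illustrative assumptions; only the overall O((n + log(1/delta))/eps) form echoes the convex-case bound above.

```python
# Scenario (sampled) approximation of a chance-constrained LP:
#   minimize   c^T x
#   subject to P(Y^T x > b) < eps,   Y ~ N(mu, I).
# Draw N samples and impose Y_i^T x <= b for all i, as in (*).
import math
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n = 3          # number of decision variables
eps = 0.05     # allowed violation probability
delta = 1e-3   # allowed probability that the sampled solution fails

# One common form of the convex-case bound, N = O((n + log(1/delta))/eps);
# the constant 2 is an illustrative choice, not the talk's.
N = math.ceil((2.0 / eps) * (n + math.log(1.0 / delta)))

mu, b = np.ones(n), 10.0
Y = rng.normal(loc=mu, scale=1.0, size=(N, n))   # N samples Y_1, ..., Y_N

# Sampled approximation (*): impose Y_i^T x <= b for every i;
# objective maximizes sum(x), with box bounds to keep the LP bounded.
res = linprog(c=-np.ones(n), A_ub=Y, b_ub=np.full(N, b),
              bounds=[(0.0, 5.0)] * n)
print(f"N = {N} samples, sampled-feasible x = {res.x}")
```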
In many applications of chance-constrained problems the distribution P is not completely known, i.e., the distribution is ambiguous. The natural constraint to impose in this setting is the ambiguous chance constraint

    max_{P in M} P(f(x,Y) > 0) < eps,

where M is an uncertainty set of distributions. In this talk we discuss how to extend many results known for chance-constrained problems to this more general setting. Robust deterministic optimization naturally arises in these extensions by way of the Strassen-Dudley representation theorem.
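One special case illustrates how sampling extends to the ambiguous setting: when M is a finite set of distributions, max_{P in M} P(f(x,Y) > 0) < eps simply requires the chance constraint to hold under every P in M, so one can sample from each member of M and impose all of the resulting constraints. A minimal sketch under that purely illustrative finite-M assumption (the talk's results cover far more general uncertainty sets):

```python
# Sampled approximation of an ambiguous chance constraint for a finite
# uncertainty set, here M = {N(mu_1, I), N(mu_2, I)}.  The finite-M
# assumption and the distributions are illustrative only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, b, N = 3, 10.0, 400   # dimensions and per-distribution sample count

means = [np.ones(n), 1.5 * np.ones(n)]   # the two members of M

# max over P in M becomes "feasible under every P": stack the sampled
# constraints Y_i^T x <= b drawn from each distribution in M.
A_ub = np.vstack([rng.normal(loc=mu, scale=1.0, size=(N, n)) for mu in means])
res = linprog(c=-np.ones(n), A_ub=A_ub, b_ub=np.full(A_ub.shape[0], b),
              bounds=[(0.0, 5.0)] * n)
print("x feasible for sampled constraints from every P in M:", res.x)
```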
(Joint work with Emre Erdogan)
Brief Bio: Garud Iyengar received his PhD in Electrical Engineering from Stanford University in 1998. Since then he has been with the Industrial Engineering and Operations Research (IEOR) department at Columbia University, where he is currently an Associate Professor.