In this talk, we consider a class of trust region algorithms to solve
stochastic programming problems that satisfy the following properties:
(1) The objective function can only be evaluated with some error,
for example via simulation, and at high computational cost.
(2) The error can be decreased with more computational effort.
(3) The higher-order derivatives of the objective function are unavailable.
Though our work is motivated by problems arising in stochastic simulation
optimization (e.g., revenue management), such optimization problems also commonly arise in
engineering design (e.g., helicopter rotor blade design).
Due to the high cost of evaluating the objective function, our aim is
to develop convergent algorithms that can solve such problems
while requiring as few evaluations of the objective function as possible.
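To make properties (1)-(3) concrete, here is a minimal sketch in Python of a sample-average estimator of a noisy objective. The function name and the quadratic-plus-noise objective are illustrative assumptions, not part of the talk; the point is only that the estimation error shrinks like 1/sqrt(n_samples), so accuracy can be bought with more computational effort, while no derivative information is produced.

```python
import numpy as np

def sample_average_objective(x, n_samples, rng=None):
    """Estimate a hypothetical noisy objective f(x) = E[F(x, xi)]
    by averaging n_samples Monte Carlo replications.

    The standard error of the estimate shrinks like 1/sqrt(n_samples),
    so more computational effort yields a more accurate value.
    Only function values are returned; no derivatives are available.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    # Illustrative smooth objective plus zero-mean simulation noise.
    noise = rng.standard_normal(n_samples)
    replications = np.sum(x**2) + 0.5 * noise
    return replications.mean()

# More samples -> a less noisy estimate of the true value (here, 2.0).
x0 = np.array([1.0, 1.0])
print(sample_average_objective(x0, n_samples=10))
print(sample_average_objective(x0, n_samples=10_000))
```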
When the objective function and its gradient can be evaluated easily
and exactly, a typical trust region algorithm works as follows.
At each iteration, we first construct a polynomial model function, typically a
truncated Taylor series expansion of the objective at the
current iterate, that approximates the objective function
in a certain neighborhood of the current iterate called
the trust region. We then optimize this model function within the trust
region. If the resulting solution has a lower objective function value,
we accept it as the next iterate; otherwise, we adjust the trust
region size and/or the model function appropriately and try again.
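For reference, the following is a minimal sketch of one iteration of such a classical trust region method, assuming exact function and gradient evaluations. The linear model, the Cauchy-type step, the acceptance threshold eta, and the radius-update constants are illustrative choices, not the specific rules discussed in the talk.

```python
import numpy as np

def trust_region_step(f, grad, x, delta, eta=0.1):
    """One iteration of a classical trust region method (sketch).

    Builds a first-order Taylor model m(p) = f(x) + g.T p, minimizes it
    on the ball ||p|| <= delta (the minimizer is the step of length delta
    along -g), and accepts or rejects the step based on the ratio of
    actual to predicted reduction. f and grad are assumed exact here.
    """
    fx, g = f(x), grad(x)
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return x, delta                 # stationary point: nothing to do
    p = -delta * g / gnorm              # model minimizer on the boundary
    predicted = -(g @ p)                # m(0) - m(p) > 0
    actual = fx - f(x + p)
    rho = actual / predicted
    if rho >= eta:                      # sufficient agreement: accept step
        x = x + p
        if rho > 0.75:
            delta *= 2.0                # model is trustworthy: expand region
    else:
        delta *= 0.5                    # poor agreement: shrink and try again
    return x, delta

# Illustrative run on a smooth quadratic with minimizer (1, 1).
f = lambda x: np.sum((x - 1.0) ** 2)
grad = lambda x: 2.0 * (x - 1.0)
x, delta = np.array([5.0, -3.0]), 1.0
for _ in range(50):
    x, delta = trust_region_step(f, grad, x, delta)
print(x)  # approaches the minimizer (1, 1)
```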
Since, in our case, the objective function and its higher-order
derivatives cannot be evaluated exactly, we cannot use a Taylor-series-based
model function. Accordingly, we propose
alternative linear or quadratic polynomial
model functions that are constructed by linear regression using only
sample average approximations of the objective, evaluated at points
in the neighborhood of the current iterate. We describe the various
changes that have to be made to the traditional trust region framework
in order to successfully construct and control the accuracy of such regression-based
model functions and present the convergence theory for the
resulting modified trust region algorithm. Finally, we provide
computational results for such an algorithm run on selected problems
from the CUTE test set.
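The sketch below illustrates the regression idea under stated assumptions: design points are drawn uniformly inside the trust region and a linear model is fit by least squares to noisy sample-average estimates of the objective. The sampling scheme, point count, and helper names (build_regression_model, noisy_sum_of_squares) are hypothetical, not the actual construction or accuracy-control rules presented in the talk.

```python
import numpy as np

def build_regression_model(obj_estimate, x_center, delta, n_points=30, rng=None):
    """Fit a linear model m(x) = c + g.T (x - x_center) by least-squares
    regression on noisy estimates of the objective, evaluated at points
    sampled inside the trust region of radius delta around x_center."""
    rng = np.random.default_rng() if rng is None else rng
    x_center = np.asarray(x_center, dtype=float)
    d = x_center.size
    # Design points inside the trust region around the current iterate.
    pts = x_center + delta * rng.uniform(-1.0, 1.0, size=(n_points, d))
    y = np.array([obj_estimate(p) for p in pts])
    # Least-squares fit of the intercept c and gradient estimate g.
    A = np.hstack([np.ones((n_points, 1)), pts - x_center])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    c, g = coef[0], coef[1:]
    return c, g  # model value at x_center and estimated gradient

# Example: regress against noisy sample-average estimates of x -> ||x||^2.
def noisy_sum_of_squares(x, n_samples=200, rng=np.random.default_rng(0)):
    return np.sum(np.asarray(x) ** 2) + rng.standard_normal(n_samples).mean()

c, g = build_regression_model(noisy_sum_of_squares, x_center=[1.0, 1.0], delta=0.5)
print(c, g)  # roughly 2 and roughly (2, 2)
```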