We consider a class of optimization problems with the following properties: (a) the objective function can only be evaluated with some error and at high computational cost; (b) the error can be decreased with additional computational effort; and (c) higher-order derivatives of the objective function are unavailable. Such problems commonly arise in engineering design (e.g., helicopter rotor blade design) and simulation optimization (e.g., revenue management). Our aim is to develop convergent algorithms that solve such problems while requiring as few objective function evaluations as possible.
To do so, we first develop a general convergence framework for optimization algorithms. Using this framework, we show the convergence of traditional nonlinear programming algorithms that have been suitably modified to use approximations of the objective function and its gradient. We then present a particular scheme for approximating the gradient and Hessian of the objective function using linear regression. Finally, we describe a trust region algorithm that uses linear regression to form a linear or quadratic model of the objective function, and we provide computational results for this algorithm on problems from the CUTE test set.
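As a rough illustration of the regression idea described above, the following sketch estimates a gradient from noisy function evaluations by fitting a linear model in the least-squares sense; the slope of the fitted model serves as the gradient estimate. The function and parameter names here are our own illustration under assumed sampling choices, not notation or an implementation from this work.

import numpy as np

def regression_gradient(f, x0, radius=0.1, n_samples=20, rng=None):
    # Estimate the gradient of a noisy function f at x0 by fitting the
    # linear model c + g.(x - x0) to sampled evaluations via least squares.
    # (Illustrative sketch; sampling scheme and defaults are assumptions.)
    rng = np.random.default_rng() if rng is None else rng
    d = x0.size
    # Sample displacement steps in a box of the given radius around x0.
    steps = rng.uniform(-radius, radius, size=(n_samples, d))
    values = np.array([f(x0 + s) for s in steps])
    # Design matrix [1, (x - x0)]; solve for [c, g] in the least-squares sense.
    A = np.hstack([np.ones((n_samples, 1)), steps])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs[1:]  # fitted slope g = gradient estimate

# Example: noisy quadratic f(x) = x.x + noise; true gradient at x0 is 2*x0.
x0 = np.array([1.0, -2.0])
noisy_f = lambda x: float(x @ x + 0.01 * np.random.standard_normal())
print(regression_gradient(noisy_f, x0))  # approximately [2.0, -4.0]

Because the fit averages over many noisy evaluations rather than differencing a single pair of them, the estimate can be made more accurate by taking more samples, which is what property (b) above makes possible.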