Sample Average Approximation Methods for Stochastic MINLPs

Event Details
  • Date/Time:
    • Friday September 12, 2003
      11:00 am - 11:59 pm
  • Location: ISyE Groseclose Rm 304
  • Fee(s):
    N/A
Contact
Barbara Christopher
Industrial and Systems Engineering
404.385.3102
Summaries

Summary Sentence: Sample Average Approximation Methods for Stochastic MINLPs

Full Summary: Sample Average Approximation Methods for Stochastic MINLPs

One approach to process design with uncertain parameters is to formulate a stochastic MINLP. When there are many uncertain parameters, the number of samples becomes unmanageably large, and computing the solution to the MINLP can be difficult and very time consuming. In this talk, two new algorithms, the optimality gap method (OGM) and the confidence level method (CLM), will be presented for solving convex stochastic MINLPs. At each iteration, the sample average approximation method is applied to the NLP sub-problem and the MILP master problem. A smaller sample size problem is solved multiple times with different batches of i.i.d. samples to make decisions, and a larger sample size problem (with continuous/discrete decision variables fixed) is solved to re-evaluate the objective values. In the first algorithm, the sample sizes are iteratively increased until the optimality gap between the upper and lower bounds is within a pre-specified tolerance. Instead of requiring a small optimality gap, the second algorithm uses tight bounds for comparing the objective values of NLP sub-problems and weak bounds for cutting off solutions in the MILP master problems; hence the confidence of finding the optimal discrete solution can be adjusted by the parameter used to tighten and weaken the bounds. The case studies show that the algorithms can significantly reduce the computational time required to find a solution with a given degree of confidence.
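The sampling scheme described above, in which small-sample problems are solved repeatedly to produce candidate solutions and a statistical lower bound while a large-sample problem re-evaluates the best candidate for an upper bound, is the generic sample average approximation (SAA) bounding procedure. A minimal sketch of that idea on a toy one-dimensional problem (not the OGM/CLM algorithms themselves; the problem, sample sizes, and function names below are illustrative assumptions):

```python
import random
import statistics

# Toy stochastic program: minimize E[(x - xi)^2] over x, with
# xi ~ Uniform(0, 2). The true optimum is x* = E[xi] = 1, with
# optimal value Var(xi) = 1/3. In SAA, the expectation is replaced
# by an average over i.i.d. samples.

def solve_saa(samples):
    """The minimizer of the sample average of (x - xi)^2 is the sample mean."""
    return statistics.mean(samples)

def objective(x, samples):
    """Sample-average estimate of E[(x - xi)^2]."""
    return statistics.mean((x - xi) ** 2 for xi in samples)

def saa_bounds(small_n=50, big_n=5000, batches=20, seed=0):
    rng = random.Random(seed)
    # Lower-bound estimate: average the optimal values of several
    # small-sample problems solved with independent batches.
    batch_vals, candidates = [], []
    for _ in range(batches):
        batch = [rng.uniform(0.0, 2.0) for _ in range(small_n)]
        x_hat = solve_saa(batch)
        candidates.append(x_hat)
        batch_vals.append(objective(x_hat, batch))
    lower = statistics.mean(batch_vals)
    # Upper-bound estimate: fix the best candidate decision and
    # re-evaluate its objective on a much larger sample.
    big = [rng.uniform(0.0, 2.0) for _ in range(big_n)]
    x_best = min(candidates, key=lambda x: objective(x, big))
    upper = objective(x_best, big)
    return x_best, lower, upper

if __name__ == "__main__":
    x_best, lower, upper = saa_bounds()
    print(f"x_best={x_best:.3f}  lower={lower:.3f}  upper={upper:.3f}")
```

In the talk's setting, `solve_saa` would be an NLP sub-problem or MILP master problem rather than a closed-form mean, and the sample sizes would grow across iterations until the estimated gap between `upper` and `lower` meets the tolerance.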

Additional Information

In Campus Calendar
No
Groups

School of Industrial and Systems Engineering (ISYE)

Invited Audience
No audiences were selected.
Categories
Seminar/Lecture/Colloquium
Keywords
No keywords were submitted.
Status
  • Created By: Barbara Christopher
  • Workflow Status: Published
  • Created On: Oct 8, 2010 - 7:42am
  • Last Updated: Oct 7, 2016 - 9:52pm