One approach to process design under uncertain parameters is to formulate a stochastic MINLP. When there are many uncertain parameters, the number of samples required becomes unmanageably large, and computing the solution to the MINLP can be difficult and very time consuming. In this talk, two new algorithms, the optimality gap method (OGM) and the confidence level method (CLM), will be presented for solving convex stochastic MINLPs. At each iteration, the sample average approximation (SAA) method is applied to the NLP sub-problem and the MILP master problem: a smaller sample-size problem is solved multiple times with different batches of i.i.d. samples to make decisions, and a larger sample-size problem (with the continuous/discrete decision variables fixed) is solved to re-evaluate the objective values.

In the first algorithm, the sample sizes are iteratively increased until the optimality gap interval between the upper and lower bounds is within a pre-specified tolerance. Instead of requiring a small optimality gap, the second algorithm uses tight bounds for comparing the objective values of the NLP sub-problems and weak bounds for cutting off solutions in the MILP master problems, so the confidence of finding the optimal discrete solution can be adjusted by the parameter used to tighten and weaken the bounds. The case studies show that the algorithms can significantly reduce the computational time required to find a solution with a given degree of confidence.
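To illustrate the sample average approximation bounding logic that the optimality gap method builds on, the following minimal Python sketch estimates statistical lower and upper bounds for a toy one-dimensional stochastic program, min_x E[(x - xi)^2] with xi ~ N(0,1). The problem, sample sizes, and function names are illustrative assumptions for exposition only, not the MINLP formulation or the algorithms from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_solve(xi):
    """Solve the SAA of min_x E[(x - xi)^2]; the minimizer is the sample mean."""
    x = xi.mean()
    obj = np.mean((x - xi) ** 2)
    return x, obj

def evaluate(x, xi):
    """Re-evaluate a fixed candidate x on an independent (larger) sample."""
    return np.mean((x - xi) ** 2)

def optimality_gap_estimate(n_small=100, n_large=10_000, n_batches=20):
    # Lower-bound statistic: average SAA optimal value over several
    # small-sample batches of i.i.d. scenarios (biased low in expectation).
    batch_vals, candidates = [], []
    for _ in range(n_batches):
        xi = rng.standard_normal(n_small)
        x_hat, v_hat = saa_solve(xi)
        batch_vals.append(v_hat)
        candidates.append(x_hat)
    lb = np.mean(batch_vals)
    lb_half_width = 1.96 * np.std(batch_vals, ddof=1) / np.sqrt(n_batches)

    # Upper-bound statistic: each candidate is fixed and re-evaluated on a
    # larger independent sample; keep the best candidate.
    xi_big = rng.standard_normal(n_large)
    ubs = [evaluate(x, xi_big) for x in candidates]
    ub = min(ubs)
    x_best = candidates[int(np.argmin(ubs))]

    return x_best, lb, lb_half_width, ub, ub - lb

x_best, lb, hw, ub, gap = optimality_gap_estimate()
print(f"candidate x = {x_best:.4f}, LB = {lb:.4f} (+/- {hw:.4f}), "
      f"UB = {ub:.4f}, estimated gap = {gap:.4f}")
```

In an iterative scheme of the kind described above, the sample sizes would be increased and the procedure repeated until the estimated gap (and its confidence interval) falls below the chosen tolerance.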