Stochastic Programming

Part of a series: Optimization Under Uncertainty.

Mathematical programming, or optimization, deals with computing the optimum of an objective function and the corresponding decision variables [Locatelli and Schoen, 2013]. For example, the problem of minimizing a function \(f(x)\) and determining a minimizer \(x^*\) is posed as:

\[ \begin{equation} \begin{aligned} x^*= \underset{x}{\text{arg min }} f(x) \end{aligned} \end{equation} \]

The subfield of stochastic programming specializes in optimization problems with uncertain parameters [Birge and Louveaux, 2011]. With an uncertain parameter \(p\) following a probability distribution, the function \(f(x,p)\) becomes uncertain and effectively represents a distribution of possible values. Stochastic programs handle this uncertainty by optimizing the expectation of the objective function:

\[ \begin{equation} \begin{aligned} x^*=\underset{x}{\text{arg min }} E_p[ f(x,p)] \end{aligned} \end{equation} \]

The possible realizations of an uncertain parameter \(p\) follow an underlying probability distribution. As this distribution is often unknown in practice, a discretized probability distribution is considered, such that a set of generated or sampled scenarios represents the uncertainty. A scenario is one possible realization of the uncertain parameter. With the scenarios, the problem can be reformulated as in the equation below, where \(p_i\) is a scenario with probability \(\pi_i\) from a set of \(S\) scenarios.

\[ \begin{equation} \begin{aligned} x^*=\underset{x}{\text{arg min }} E_p[ f(x,p)] \approx \underset{x}{\text{arg min }} \sum_{i=1}^S \pi_i f(x,p_i) \end{aligned} \end{equation} \]
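As a minimal numeric sketch of this scenario reformulation, consider the toy objective \(f(x,p)=(x-p)^2\) with three hypothetical scenarios. The expectation is replaced by the probability-weighted scenario sum, and for this convex toy problem the minimizer coincides with the probability-weighted mean of the scenarios:

```python
# Scenario-based approximation of min_x E_p[f(x, p)], illustrated with
# the toy objective f(x, p) = (x - p)^2. Scenario values p_i and
# probabilities pi_i are hypothetical.

def f(x, p):
    return (x - p) ** 2

scenarios = [1.0, 2.0, 6.0]      # possible realizations p_i
probs = [1 / 3, 1 / 3, 1 / 3]    # scenario probabilities pi_i

def expected_objective(x):
    # E_p[f(x, p)] approximated by the probability-weighted scenario sum
    return sum(pi * f(x, p) for pi, p in zip(probs, scenarios))

# Minimize over a dense grid; for this convex toy problem the minimizer
# is the probability-weighted mean of the scenarios.
grid = [i / 1000 for i in range(0, 10001)]   # x in [0, 10]
x_star = min(grid, key=expected_objective)

weighted_mean = sum(pi * p for pi, p in zip(probs, scenarios))
print(x_star, weighted_mean)  # both 3.0
```

Any off-the-shelf solver can replace the grid search; the point is that the uncertain objective has been reduced to an ordinary deterministic sum over scenarios.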

A large set of scenarios can usually capture the uncertainty better than a small one. However, as the problem size grows drastically with the number of scenarios, stochastic programs suffer from the curse of dimensionality. Specialized solution methods, such as Benders decomposition [Birge and Louveaux, 2011], therefore aim to keep the problem computationally tractable.


\(\bullet\) If the sole consideration of the optimal expected value of the function is not sufficient, risk measures can be included in the stochastic program, e.g., via bi-objective optimization, additional constraints, or additional terms in the objective.
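As one illustration of a risk term in the objective, a conditional value-at-risk (CVaR) penalty can be added to the expected scenario cost. The sketch below reuses the toy objective \(f(x,p)=(x-p)^2\); the scenarios, tail level, and risk weight are hypothetical choices, not a prescribed formulation:

```python
# Sketch of a risk-augmented scenario objective:
#   minimize  E[f(x, p)] + lam * CVaR_alpha[f(x, p)]
# All parameter values below are hypothetical illustration choices.

def cvar(losses, probs, alpha):
    # CVaR_alpha: expected loss within the worst (1 - alpha) probability tail
    pairs = sorted(zip(losses, probs), reverse=True)  # worst losses first
    tail, cum = 0.0, 0.0
    for loss, p in pairs:
        take = min(p, (1 - alpha) - cum)
        if take <= 0:
            break
        tail += take * loss
        cum += take
    return tail / (1 - alpha)

def f(x, p):
    return (x - p) ** 2

scenarios = [1.0, 2.0, 6.0]
probs = [1 / 3, 1 / 3, 1 / 3]
alpha = 0.6   # tail level of the CVaR
lam = 0.5     # risk-aversion weight

def risk_adjusted(x):
    losses = [f(x, p) for p in scenarios]
    expectation = sum(pi * l for pi, l in zip(probs, losses))
    return expectation + lam * cvar(losses, probs, alpha)

grid = [i / 100 for i in range(0, 1001)]   # x in [0, 10]
x_neutral = min(grid, key=lambda x: sum(pi * f(x, p)
                                        for pi, p in zip(probs, scenarios)))
x_risk = min(grid, key=risk_adjusted)
print(x_neutral, x_risk)  # the risk-averse decision shifts toward the extreme scenario
```

Increasing `lam` pulls the decision further toward hedging against the worst scenarios, at the cost of a higher expected value.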

\(\bullet\) The metric Value of the Stochastic Solution (VSS) evaluates the benefit of a stochastic programming approach over an optimization that neglects uncertainty (i.e., one that uses a single average-valued scenario instead). The Expected Value of Perfect Information (EVPI) quantifies the benefit of perfect foresight of the uncertainty. For details on how to calculate both metrics, we refer to [Birge and Louveaux, 2011].
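The two metrics can be illustrated on a toy newsvendor problem with hypothetical numbers (the cost \(c\), price \(r\), and demand scenarios below are assumptions for illustration). In the minimization convention used above, VSS = EEV - SP and EVPI = SP - WS, where SP is the optimal expected cost of the stochastic program, EEV evaluates the expected-value decision against all scenarios, and WS is the wait-and-see cost under perfect foresight:

```python
# Toy newsvendor example illustrating VSS and EVPI (hypothetical numbers):
# order x units at cost c, sell min(x, demand) at price r; we minimize
# the negative profit f(x, d) = c*x - r*min(x, d).

c, r = 1.0, 2.0
demands = [1.0, 2.0, 6.0]        # demand scenarios d_i
probs = [1 / 3, 1 / 3, 1 / 3]

def f(x, d):
    return c * x - r * min(x, d)

def expected_cost(x):
    return sum(pi * f(x, d) for pi, d in zip(probs, demands))

candidates = [i / 10 for i in range(0, 101)]   # x in [0, 10]

# Stochastic program (here-and-now): minimize the expected cost.
sp = min(expected_cost(x) for x in candidates)

# Expected-value problem: fix the demand at its mean, then evaluate
# that decision against all scenarios (EEV).
mean_d = sum(pi * d for pi, d in zip(probs, demands))
x_ev = min(candidates, key=lambda x: f(x, mean_d))
eev = expected_cost(x_ev)

# Wait-and-see: choose the best x after observing each scenario.
ws = sum(pi * min(f(x, d) for x in candidates)
         for pi, d in zip(probs, demands))

vss = eev - sp    # value of the stochastic solution (>= 0)
evpi = sp - ws    # expected value of perfect information (>= 0)
print(sp, eev, ws, vss, evpi)
```

Here the average-demand decision over-orders relative to the stochastic solution, so VSS is strictly positive; EVPI bounds how much one could ever gain from a perfect demand forecast.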

\(\bullet\) Stochastic programming should not be confused with stochastic optimization, which is a heuristic optimization method based on evaluations of the objective function.



John R. Birge and François Louveaux. Introduction to Stochastic Programming. Springer Series in Operations Research and Financial Engineering. Springer New York, 2011. ISBN 978-1-4614-0236-7. doi:10.1007/978-1-4614-0237-4.


Marco Locatelli and Fabio Schoen. Global Optimization: Theory, Algorithms, and Applications. MOS-SIAM Series on Optimization. SIAM, Philadelphia, PA, 2013. ISBN 9781611972665.


Sonja Germscheid


Manuel Dahmen