Chance Constrained Optimization
Part of a series: Optimization Under Uncertainty.
Description
Chance constrained optimization is an approach to solving optimization problems under uncertainty where the uncertainty also enters the inequality constraints. We need a formulation that restricts values and processes described by random variables in a meaningful way. The idea of the method is to require that the constraints are satisfied for sufficiently many realizations of the random variables, i.e., with a high enough probability.
This is reasonable for two reasons:
Soft constraints: When applying optimization to real-life problems, constraints often do not need to be strictly adhered to at all times. They rather allow for small violations and are known as soft constraints. To give an example: in power systems engineering, there are constraints that limit how much power can be transferred via a power transmission line. In reality, however, it is possible to overload these lines for a short period of time without damaging them. Hence, it is acceptable to violate the constraint on this line flow on rare occasions.
Long tails: Many probability distributions commonly used in modeling have long tails, such as the normal or exponential distribution. Random variables that follow these distributions take on values above or below a certain threshold with such a small probability that these scenarios can be assumed negligible. Certainly, this simplifying assumption has to be weighed carefully against the criticality of the constraint.
With this method we can restrict the feasible region of a stochastic optimization problem such that a solution is feasible with a sufficiently high confidence level.
Formulation
Consider an optimization problem under uncertainty

\[
\begin{aligned}
\min_{x} \quad & f(x, \xi) \\
\text{s.t.} \quad & g(x, \xi) = 0, \\
& h(x, \xi) \geq 0,
\end{aligned}
\]

where \(f\) is the objective function, \(g\) denotes the equality constraints, and \(h\) the inequality constraints. The (deterministic) decision variables are collected in the vector \(x\), while \(\xi\) is the vector containing all uncertainties in terms of random variables.
We now introduce chance constraints to restate the inequality constraints as

\[
P[h(x, \xi) \geq 0] \geq 1 - \epsilon_p.
\]
With this, the requirement changes: the constraints \(h\) have to be satisfied with a probability of at least \(1 - \epsilon_p\) in order for a solution to be feasible. The smaller we set \(\epsilon_p\), the more we restrict the set of feasible solutions, which gives higher confidence in the results but makes them harder to compute. Therefore, \(\epsilon_p\) serves as a design parameter trading off security against computational cost. It is often referred to as the violation parameter. Common values are \(\epsilon_p \in \{0.1, 0.05, 0.01\}\).
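To see what this formulation demands in practice, the probability \(P[h(x, \xi) \geq 0]\) can be estimated for a fixed candidate \(x\) by sampling. The following is a minimal sketch, assuming a hypothetical scalar constraint \(h(x, \xi) = x - \xi\) and a normally distributed \(\xi\); both choices are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def h(x, xi):
    # Hypothetical scalar inequality constraint h(x, xi) >= 0:
    # the decision x must cover the random quantity xi.
    return x - xi

def satisfaction_probability(x, n_samples=100_000):
    # Monte Carlo estimate of P[h(x, xi) >= 0] for a (hypothetical)
    # normally distributed uncertainty xi ~ N(1, 0.5).
    xi = rng.normal(loc=1.0, scale=0.5, size=n_samples)
    return np.mean(h(x, xi) >= 0)

eps_p = 0.05  # violation parameter
for x in (1.0, 1.5, 2.0):
    p = satisfaction_probability(x)
    print(f"x = {x}: P[h >= 0] ~ {p:.3f}, feasible: {p >= 1 - eps_p}")
```

For \(\epsilon_p = 0.05\), only candidates with an estimated satisfaction probability of at least \(0.95\) count as feasible.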
Types of chance constraints
There are different ways chance constraints can be stated. The formulations have certain benefits and drawbacks regarding solution precision and problem feasibility.
Single chance constraints are the direct reformulation of each of the original constraints \(h_1, h_2, \dots, h_n\):

\[
P[h_i(x, \xi) \geq 0] \geq 1 - \epsilon_i, \quad i = 1, \dots, n,
\]
where each chance constraint \(h_i\) gets its own tuning parameter \(\epsilon_i\).
The advantage of this formulation is that each constraint can be assigned a probability according to how critical it is that the constraint is adhered to. Furthermore, in case of a violation one directly knows which bound was not satisfied. However, single chance constraints only enforce that each constraint is satisfied individually at a certain confidence level, not all constraints as a whole.
Joint chance constraints, on the other hand, require the conjunction of all inequality constraints to be met with a certain probability,

\[
P[h_1(x, \xi) \geq 0, \; h_2(x, \xi) \geq 0, \; \dots, \; h_n(x, \xi) \geq 0] \geq 1 - \epsilon_p,
\]
for a joint parameter \(\epsilon_p\).
Their advantage is that they assure the satisfaction of all constraints at a certain level. The reasoning behind this is the following: when we think of deterministic constraints, we also expect all of them to hold at once, with the same probability of 1. This leads to a more conservative problem formulation. The problem, however, is that these formulations tend to be very hard to solve, even numerically, and hence can only be used in very simple or special scenarios in practice.
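The gap between the two formulations can be checked empirically: over sampled realizations, the joint satisfaction probability is at most as large as each individual one. A minimal sketch, assuming two hypothetical constraints and a two-dimensional Gaussian \(\xi\):

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(size=(100_000, 2))  # hypothetical 2-d standard Gaussian

x = 1.5  # fixed candidate decision

# Two hypothetical constraints h_i(x, xi) >= 0.
h1 = x - xi[:, 0]
h2 = x - xi[:, 1]

p1 = np.mean(h1 >= 0)                     # single: P[h_1 >= 0]
p2 = np.mean(h2 >= 0)                     # single: P[h_2 >= 0]
p_joint = np.mean((h1 >= 0) & (h2 >= 0))  # joint: P[h_1 >= 0, h_2 >= 0]

# The joint probability never exceeds min(p1, p2): satisfying each
# constraint individually at level 1 - eps does not imply the same
# level for all constraints at once.
print(f"single: {p1:.3f}, {p2:.3f}; joint: {p_joint:.3f}")
```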
Solution Approaches
Chance constrained problems in general tend to be hard to solve. This is because computing the probability requires evaluating the integral

\[
P[h(x, \xi) \geq 0] = \int_{\{\xi \,:\, h(x, \xi) \geq 0\}} f_\Xi(\xi) \, \mathrm{d}\xi,
\]

where \(f_\Xi\) is the probability density function of \(\xi\), which is often difficult or even impossible. This is especially true if the model creates a non-linear relationship between input and output uncertainties. Hence, one falls back on approximation strategies.
Some of the most prevalent methods are:
Robust / Scenario-based optimization: Transfer the problem into a deterministic one. For this, we consider the worst-case problem, i.e., we require the constraints \(h(x,\xi) \geq 0\) to be satisfied for as many realizations \(\xi^{(i)} \in \Omega, i = 1, \dots, N\) as possible, where \(\Omega\) is the set of all realizations. This is done by replacing the probabilistic constraint with a new (deterministic) constraint for each sample: \(h(x,\xi^{(i)}) \geq 0\). With enough such constraints, the solution of the new problem is likely to be feasible for the original problem as well. The number of samples required depends on the size of the violation parameter \(\epsilon_p\); a lower bound can be derived mathematically [Calafiore and Campi, 2005]. If the violation parameter \(\epsilon_p\) is very small, however, a large number of samples is required to cover the worst case. A code sketch of this approach follows after this list.
Back-mapping: Instead of computing the probability density directly, we can derive an equivalent representation. This is done by finding a monotone relation between input and output uncertainty. By performing integration in the probability subspace of the input uncertainties and then transforming the result back, one can more easily compute the required integrals. These monotone relations, however, can be demanding to find, if they exist at all. [Wendt et al., 2002]
Polynomial chaos expansion: In this more recent approach, the uncertainties in the model are replaced by deterministic representations, similar to a Fourier analysis of random variables. In doing so, the chance constraints are also transformed into deterministic versions. The benefit of this specific representation is that it gives direct access to the moments of the represented random variables, and no further relaxations of the optimization problem are necessary. A drawback of this method is that the deterministic models can grow rapidly with the number of model uncertainties. [Mühlpfordt et al., 2018], [Mühlpfordt et al., 2019]
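As an illustration of the scenario-based approach above, the sketch below replaces the chance constraint of a toy problem with one deterministic constraint per sampled realization and solves the result with SciPy. The objective, the constraint \(h(x, \xi) = x - \xi\), the distribution of \(\xi\), and the sample size are all hypothetical choices for illustration, not derived from the bound in [Calafiore and Campi, 2005].

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy problem (hypothetical): minimize f(x) = x subject to the chance
# constraint P[x - xi >= 0] >= 1 - eps_p with xi ~ N(1, 0.5).
N = 200  # number of scenarios, chosen ad hoc for illustration
scenarios = rng.normal(loc=1.0, scale=0.5, size=N)

# One deterministic constraint h(x, xi_i) = x - xi_i >= 0 per scenario;
# scipy's "ineq" convention requires fun(x) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda x, s=s: x[0] - s} for s in scenarios
]

result = minimize(lambda x: x[0], x0=np.array([3.0]), constraints=constraints)
x_opt = result.x[0]

# Empirical check of the chance constraint on fresh samples.
test = rng.normal(loc=1.0, scale=0.5, size=100_000)
print(f"x* = {x_opt:.3f}, estimated P[h >= 0] = {np.mean(x_opt >= test):.3f}")
```

In this toy case the scenario problem reduces to \(x \geq \max_i \xi^{(i)}\), so drawing more scenarios pushes the solution further into the safe, i.e., more conservative, region.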
Applications
Some areas in which stochastic optimization problems are solved with the help of chance constraints are:
Power systems engineering: Computation of optimal power generation and distribution under the probabilistic influence of renewable energy sources. Chance constraints are used for example to describe the engineering limits on the power transmission lines. A solution is feasible if the transmission lines are not overloaded “most of the time” [Mühlpfordt et al., 2019].
Financial risk management: Unpredictable and volatile market situations lead to many uncertainties when investing. When aiming to maximize revenue, chance constraints can be used to require certain investment outcomes with a given probability. They can be used alongside common risk metrics such as value at risk.
Unmanned autonomous vehicle navigation: Used for optimal navigation and obstacle avoidance. Uncertainty can stem from random obstacles or sensor measurement errors. With only estimates of the distributions of the obstacles and errors available, chance constraints can be used to require a certain confidence level for collision avoidance when navigating through a given region.
Literature
- CC05
Giuseppe Calafiore and M. C. Campi. Uncertain convex programs: randomized solutions and confidence levels. Mathematical Programming, 102(1):25–46, January 2005. doi:10.1007/s10107-003-0499-y.
- MuhlpfordtFH18
Tillmann Mühlpfordt, Timm Faulwasser, and Veit Hagenmeyer. A generalized framework for chance-constrained optimal power flow. Sustainable Energy, Grids and Networks, 16:231–242, December 2018. doi:10.1016/j.segan.2018.08.002.
- MuhlpfordtRH+19
Tillmann Mühlpfordt, Line Roald, Veit Hagenmeyer, Timm Faulwasser, and Sidhant Misra. Chance-constrained AC optimal power flow: a polynomial chaos approach. IEEE Transactions on Power Systems, 34(6):4806–4816, 2019. doi:10.1109/TPWRS.2019.2918363.
- RMCA15
Line Roald, Sidhant Misra, Michael Chertkov, and Göran Andersson. Optimal power flow with weighted chance constraints and general policies for generation control. March 2015. URL: http://arxiv.org/abs/1504.00057, arXiv:1504.00057v1.
- WLW02
Moritz Wendt, Pu Li, and Günter Wozny. Nonlinear chance-constrained process optimization under uncertainty. Industrial and Engineering Chemistry Research, 41(15):3621–3629, July 2002. doi:10.1021/ie010649s.
Contributor
Sonja Germscheid