Approximating inequality systems within probability functions: studying implications for problems and consistency of first-order information

Wim Van Ackooij, Pedro Pérez-Aros, Diego Morales-Poblete, David Villacís

September 2025

Abstract

In this work, we study optimization problems featuring so-called probability or chance constraints. Probability constraints measure the level of satisfaction of an underlying random inequality system and ensure that this level is sufficiently high. The underlying inequality system may be given by a function that is only abstractly known or costly to evaluate. Even when it is possible to investigate first-order properties of the probability function theoretically, or even to use them numerically, the computational cost may be significant. We suggest an inner approximation framework, thus providing a family of approximate probability functions. We establish that this sequence converges hypographically and continuously to the nominal probability function. Furthermore, we examine the convergence of (sub-)gradients, provide a suitable formula for the (sub-)gradients of the approximate probability functions, and prove that, under mild assumptions, this sequence of subgradients converges to a subgradient of the original probability function. We also examine the consistency of optimization problems in which the probability constraint is replaced by its approximation. Finally, we illustrate our results with numerical applications and propose several algorithms based on our findings.
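In standard chance-constrained-optimization notation (the paper's precise setting may differ), the probability function and the constraint it induces can be written as:

```latex
% Probability function of a random inequality system
% g : R^n x Xi -> R^m (inequality understood componentwise),
% where xi is a random vector:
\[
  \varphi(x) = \mathbb{P}\bigl[\, g(x,\xi) \le 0 \,\bigr],
\]
% and the chance (probability) constraint, for a prescribed
% safety level p in (0,1):
\[
  \varphi(x) \ge p .
\]
```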

BibTeX

@misc{vanackooij2025approximating,
  title         = {Approximating inequality systems within probability functions: studying implications for problems and consistency of first-order information},
  author        = {Wim Van Ackooij and Pedro Pérez-Aros and Diego Morales-Poblete and David Villacís},
  year          = {2025},
  howpublished  = {Optimization Online preprint},
  url           = {https://optimization-online.org/?p=31921},
  abstract      = {In this work, we study optimization problems featuring so-called probability or chance constraints. Probability constraints measure the level of satisfaction of an underlying random inequality system and ensure that this level is sufficiently high. The underlying inequality system may be given by a function that is only abstractly known or costly to evaluate. Even when it is possible to investigate first-order properties of the probability function theoretically, or even to use them numerically, the computational cost may be significant. We suggest an inner approximation framework, thus providing a family of approximate probability functions. We establish that this sequence converges hypographically and continuously to the nominal probability function. Furthermore, we examine the convergence of (sub-)gradients, provide a suitable formula for the (sub-)gradients of the approximate probability functions, and prove that, under mild assumptions, this sequence of subgradients converges to a subgradient of the original probability function. We also examine the consistency of optimization problems in which the probability constraint is replaced by its approximation. Finally, we illustrate our results with numerical applications and propose several algorithms based on our findings.}
}
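A probability function of the kind discussed in the abstract can be estimated numerically by Monte Carlo sampling. A minimal sketch, using an illustrative linear inequality system that is an assumption for the demo and not the paper's setting or method:

```python
import numpy as np

rng = np.random.default_rng(0)

def probability_function(x, n_samples=100_000):
    """Monte Carlo estimate of phi(x) = P[g(x, xi) <= 0] for the
    toy system g(x, xi) = xi^T x - 1 with xi ~ N(0, I).
    (Hypothetical example, not the paper's inequality system.)"""
    xi = rng.standard_normal((n_samples, len(x)))
    g = xi @ x - 1.0          # evaluate g(x, xi) on each sample
    return np.mean(g <= 0.0)  # fraction of samples satisfying the system

x = np.array([0.0, 0.0])
# With x = 0 we have g = -1 < 0 for every sample, so the estimate is 1.0.
print(probability_function(x))
```

Replacing the sample-average indicator above with a smooth inner approximation would yield a differentiable surrogate, which is the kind of object whose (sub-)gradient consistency the paper analyzes.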