Stochastic gradient descent for risk optimization
This paper presents an approach to solving risk optimization problems with stochastic gradient descent methods. The main challenge is to avoid the costly evaluation of the failure probability and its gradient at every iteration of the optimization process. We address this by employing a stochastic gradient descent algorithm to minimize the Chernoff bound of the limit state function associated with the probabilistic constraint. The chosen algorithm, Adam, is a robust stochastic gradient method widely used in machine learning training. A numerical example is presented to illustrate the advantages and potential drawbacks of the proposed approach.
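Since the abstract names both ingredients, the sketch below illustrates how they might fit together. It assumes a hypothetical penalized formulation (a quadratic design cost plus a weighted Chernoff bound E[exp(-λ g(x, θ))] on the failure probability P[g(x, θ) ≤ 0]), a linear limit state g, and a normally distributed random parameter θ; none of these modeling choices are taken from the paper itself. Only the Adam update (Kingma & Ba, 2015) follows its standard published form.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical problem setup (illustration only; the paper's
# formulation may differ). Design variable x in R^2, random load
# theta ~ N(2.0, 0.5). Failure event: g(x, theta) = x0 + x1 - theta <= 0.
# Chernoff bound: P[g <= 0] <= E[exp(-LAM * g(x, theta))] for LAM > 0,
# whose gradient admits a cheap, unbiased Monte Carlo estimate.
LAM = 4.0      # assumed Chernoff exponent (fixed here for simplicity)
PENALTY = 5.0  # assumed weight on the probabilistic-constraint surrogate
BATCH = 128    # Monte Carlo samples drawn per gradient step

def sample_objective_grad(x):
    """Stochastic gradient of ||x||^2 + PENALTY * E[exp(-LAM * g(x, theta))]."""
    theta = rng.normal(2.0, 0.5, size=BATCH)
    g = x[0] + x[1] - theta            # limit state samples
    w = np.exp(-LAM * g)               # Chernoff integrand
    # d/dx exp(-LAM * g) = -LAM * exp(-LAM * g) * dg/dx, with dg/dx = [1, 1]
    bound_grad = -LAM * np.mean(w) * np.ones(2)
    cost_grad = 2.0 * x                # gradient of the quadratic cost
    return cost_grad + PENALTY * bound_grad

# --- Adam, the stochastic gradient descent variant named in the abstract.
def adam(x0, grad_fn, steps=2000, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    x = x0.copy()
    m = np.zeros_like(x)   # first-moment (mean) estimate
    v = np.zeros_like(x)   # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        grad = grad_fn(x)
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad**2
        m_hat = m / (1 - b1**t)        # bias correction
        v_hat = v / (1 - b2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

x_opt = adam(np.array([0.0, 0.0]), sample_objective_grad)
print("approximate optimal design:", x_opt)
```

In this toy setting the cheap, noisy Chernoff-bound gradient replaces the expensive failure-probability gradient at every iteration, which is the point the abstract emphasizes; Adam's adaptive per-coordinate step sizes help absorb the Monte Carlo noise and the large dynamic range of the exponential term.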