# Adjoint Sensitivity Analysis in UQ [DRAFT]

Part of a series: Sensitivity analysis.


Sensitivity analysis plays a crucial role in estimating how the output of a computational model varies under uncertainties in the input data. In uncertainty quantification (UQ) for numerical models that incorporate uncertainties in initial conditions, boundary conditions, and/or model parameters, derivative-based methods are particularly useful. Let's consider a simple ordinary differential equation (ODE) problem where the initial condition is uncertain, for instance

\[
\frac{d}{dt} u(t, Z) = f\big(u(t, Z)\big), \qquad u(0, Z) = Z,
\]

where \(Z\) is a random variable.

Here, the goal is to compute the expected value \(E[u]\) and the variance \(V[u]\) of quantities of interest (QoIs). For example, the expected value at a final time \(T\) is the QoI \(R(u(Z)) = E[u(T,Z)]\), with the corresponding reduced QoI \(\tilde{R}(Z) = u(T,Z)\). The sensitivities, i.e. the partial derivatives of the QoIs with respect to the input parameters, are essential for this task.

The main goal in uncertainty quantification is to compute the expected value and the variance. To compute them, assume that \(Z\) has the probability density function \(f\). Then the expected value is

\[
E[\tilde{R}(Z)] = \int \tilde{R}(z)\, f(z)\, dz.
\]

Linearizing \(\tilde{R}\) around \(Z^0 = E[Z]\),

\[
\tilde{R}(Z) \approx \tilde{R}(Z^0) + \frac{\partial \tilde{R}}{\partial z}(Z^0)\,(Z - Z^0),
\]

yields \(E[\tilde{R}(Z)] \approx \tilde{R}(E[Z])\), because the linear term has zero mean. Furthermore, the variance of the linearization is

\[
V[\tilde{R}(Z)] \approx \frac{\partial \tilde{R}}{\partial z}(Z^0)\; C\; \frac{\partial \tilde{R}}{\partial z}(Z^0)^T,
\]

where \(C\) is the covariance matrix of \(Z\).

The last expression is known as the *sandwich rule*.
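As a quick numerical illustration of the sandwich rule (a minimal sketch — the QoI \(\tilde{R}\), the linearization point, and the covariance below are invented for the example), the linearized variance can be compared against a Monte Carlo estimate:

```python
import numpy as np

# Invented scalar QoI of two uncertain parameters (for illustration only).
def r_tilde(z0, z1):
    return np.sin(z0) + z0 * z1

def grad_r_tilde(z0, z1):
    return np.array([np.cos(z0) + z1, z0])   # analytic gradient

Z0 = np.array([0.5, 1.2])                    # linearization point Z^0 = E[Z]
C = np.array([[1e-4, 2e-5],
              [2e-5, 4e-4]])                 # invented covariance of Z

# Sandwich rule: V[R~(Z)] ~ g C g^T with g the gradient at Z^0.
g = grad_r_tilde(*Z0)
var_sandwich = g @ C @ g

# Monte Carlo reference for comparison.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(Z0, C, size=200_000)
var_mc = np.var(r_tilde(samples[:, 0], samples[:, 1]))

print(var_sandwich, var_mc)   # the two values agree to within a few percent
```

Since the covariance is small, the linearization is very accurate here and the sandwich rule matches the sampled variance closely, without any sampling.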

The question of computing the sensitivities is still unanswered. Let's perform a notational change from \(Z\) to \(z\) for the uncertain parameters (here the interest is in calculating derivatives, not in randomness). Moreover, let's define the model equations

\begin{equation}
\mathcal{E}(u(z), z) = 0, \label{DSA_1}
\end{equation}

where \(z \in \mathbb{R}^d\). In order to get \(\frac{\partial }{\partial z} u(z)\), \eqref{DSA_1} is differentiated with respect to \(z\):

\[
\frac{\partial \mathcal{E}}{\partial u}\, \frac{\partial u}{\partial z} + \frac{\partial \mathcal{E}}{\partial z} = 0.
\]
The dimensions of all the quantities are

\[
u(z) \in \mathbb{R}^n, \qquad
\frac{\partial u}{\partial z} \in \mathbb{R}^{n \times d}, \qquad
\frac{\partial \mathcal{E}}{\partial u} \in \mathbb{R}^{n \times n}, \qquad
\frac{\partial \mathcal{E}}{\partial z} \in \mathbb{R}^{n \times d},
\]

and also the problem itself, \(\mathcal{E}(u,z) \in \mathbb{R}^n\). Assuming *existence* and *uniqueness* of the solution, as well as *invertibility of \(\frac{\partial }{\partial u} \mathcal{E}(u(z),z)\)*, allows one to write

\[
S := \frac{\partial u}{\partial z} = -\left(\frac{\partial \mathcal{E}}{\partial u}\right)^{-1} \frac{\partial \mathcal{E}}{\partial z}.
\]
In order to compute \(\frac{\partial u}{\partial z}\), the linear system

\[
\frac{\partial \mathcal{E}}{\partial u}\, S = -\frac{\partial \mathcal{E}}{\partial z}
\]

has to be solved. Thus, for a QoI \(\tilde{R}(z) = R(u(z))\) with \(R : \mathbb{R}^n \to \mathbb{R}^q\),

\[
\frac{\partial \tilde{R}}{\partial z} = \frac{\partial R}{\partial u}\, S
= -\frac{\partial R}{\partial u} \left(\frac{\partial \mathcal{E}}{\partial u}\right)^{-1} \frac{\partial \mathcal{E}}{\partial z}.
\]
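The direct sensitivity computation can be sketched in a few lines of NumPy. This is a minimal illustration, not an implementation from the text: the linear model \(\mathcal{E}(u,z) = Au - Bz\), the QoI \(R(u) = cu\), and all matrices are invented, which makes \(\frac{\partial \mathcal{E}}{\partial u} = A\) and \(\frac{\partial \mathcal{E}}{\partial z} = -B\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, q = 5, 3, 1

# Invented linear model E(u, z) = A u - B z = 0,
# so dE/du = A and dE/dz = -B.
A = rng.normal(size=(n, n)) + n * np.eye(n)   # shifted to be well-conditioned
B = rng.normal(size=(n, d))
c = rng.normal(size=(q, n))                   # invented QoI R(u) = c u, dR/du = c

# Direct sensitivity method: solve (dE/du) S = -(dE/dz), i.e. A S = B.
# This amounts to d linear systems of size n x n, one per column of S.
S = np.linalg.solve(A, B)                     # S = du/dz, shape (n, d)
grad_direct = c @ S                           # dR~/dz = (dR/du) S, shape (q, d)

# Consistency check against the closed form du/dz = A^{-1} B.
grad_exact = c @ np.linalg.inv(A) @ B
print(np.allclose(grad_direct, grad_exact))   # True
```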
In the last step, \(d\) linear systems of equations of size \(n \times n\) have to be solved (one per column of \(S\)). For small \(d\), solving these systems is not too difficult, and the approach is called the *direct sensitivity method*. However, as \(d\) increases, solving the systems becomes computationally expensive. In this case a method called the *adjoint sensitivity method* can be introduced. Let's consider \(S^T\):

\[
\left(\frac{\partial \tilde{R}}{\partial z}\right)^T = S^T \left(\frac{\partial R}{\partial u}\right)^T
= -\left(\frac{\partial \mathcal{E}}{\partial z}\right)^T \left(\frac{\partial \mathcal{E}}{\partial u}\right)^{-T} \left(\frac{\partial R}{\partial u}\right)^T.
\]
Here, one needs to solve the adjoint equation

\[
\left(\frac{\partial \mathcal{E}}{\partial u}\right)^T \lambda = -\left(\frac{\partial R}{\partial u}\right)^T
\]

and then evaluate

\[
\frac{\partial \tilde{R}}{\partial z} = \lambda^T\, \frac{\partial \mathcal{E}}{\partial z}.
\]
In this new formulation only \(q\) linear systems of size \(n \times n\) have to be solved. Since \(q\) is usually much smaller than \(d\), the adjoint sensitivity method saves a huge amount of computational cost for very large \(d\). This technique is also called *backpropagation* in the field of artificial neural networks (NNs).
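For a concrete comparison, the following sketch uses an invented linear model \(\mathcal{E}(u,z) = Au - Bz\) and an invented QoI \(R(u) = cu\) (neither is from the text): the adjoint method recovers the direct-method gradient with \(q\) solves of the transposed system instead of \(d\) solves.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, q = 5, 3, 1

# Invented linear model E(u, z) = A u - B z = 0, QoI R(u) = c u.
A = rng.normal(size=(n, n)) + n * np.eye(n)
B = rng.normal(size=(n, d))
c = rng.normal(size=(q, n))

# Direct method for reference: d solves with A.
grad_direct = c @ np.linalg.solve(A, B)

# Adjoint method: q solves with A^T for the adjoint variable,
#   (dE/du)^T lam = -(dR/du)^T,
lam = np.linalg.solve(A.T, -c.T)               # lam has shape (n, q)
# followed by one cheap product: dR~/dz = lam^T (dE/dz) = lam^T (-B).
grad_adjoint = lam.T @ (-B)

print(np.allclose(grad_direct, grad_adjoint))  # True
```

With \(q = 1\) and \(d = 3\) the saving is modest, but for \(d\) in the thousands (e.g. a discretized parameter field) the adjoint route replaces thousands of solves with one.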

The formulation can be further simplified using a Lagrangian:

\[
L(u, z, \lambda) := R(u) + \lambda^T \mathcal{E}(u, z).
\]

Since \(\mathcal{E}(u(z), z) = 0\), the constraint term vanishes along solutions, so for every choice of \(\lambda\),

\[
L(u(z), z, \lambda) = R(u(z)).
\]

The Lagrangian formulation can then be written in the form of the QoI as

\[
\tilde{R}(z) = R(u(z)) + \lambda^T \mathcal{E}(u(z), z).
\]

Taking the partial derivatives of this expression with respect to \(\lambda\), \(u\) and \(z\),

\[
\frac{\partial L}{\partial \lambda} = \mathcal{E}(u, z)^T, \qquad
\frac{\partial L}{\partial u} = \frac{\partial R}{\partial u} + \lambda^T \frac{\partial \mathcal{E}}{\partial u}, \qquad
\frac{\partial L}{\partial z} = \lambda^T \frac{\partial \mathcal{E}}{\partial z},
\]

and setting the first two to zero recovers the state equation and the adjoint equation \(\left(\frac{\partial \mathcal{E}}{\partial u}\right)^T \lambda = -\left(\frac{\partial R}{\partial u}\right)^T\); the third then yields the adjoint sensitivity

\[
\frac{\partial \tilde{R}}{\partial z} = \lambda^T\, \frac{\partial \mathcal{E}}{\partial z}.
\]
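As a sanity check of the Lagrangian recipe, the following sketch works end to end on an invented nonlinear model \(\mathcal{E}(u,z) = Au + u^3 - Bz\) (elementwise cube; all matrices and the nonlinearity are assumptions for illustration). It solves the state equation by Newton's method, forms the adjoint sensitivity, and compares it against finite differences.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 3

# Invented nonlinear model E(u, z) = A u + u**3 - B z = 0 (elementwise cube).
A = rng.normal(size=(n, n)) + n * np.eye(n)
B = rng.normal(size=(n, d))
c = rng.normal(size=n)                 # invented scalar QoI R(u) = c . u (q = 1)

def solve_state(z, iters=50):
    """Solve E(u, z) = 0 for u by Newton's method."""
    u = np.zeros(n)
    for _ in range(iters):
        F = A @ u + u**3 - B @ z
        J = A + np.diag(3 * u**2)      # dE/du
        u = u - np.linalg.solve(J, F)
    return u

z0 = rng.normal(size=d)
u0 = solve_state(z0)

# Adjoint equation from dL/du = 0: (dE/du)^T lam = -(dR/du)^T.
J = A + np.diag(3 * u0**2)
lam = np.linalg.solve(J.T, -c)

# Sensitivity from dL/dz: dR~/dz = lam^T (dE/dz), with dE/dz = -B.
grad_adjoint = lam @ (-B)

# Finite-difference check of dR~/dz.
eps = 1e-6
grad_fd = np.array([
    (c @ solve_state(z0 + eps * e) - c @ solve_state(z0 - eps * e)) / (2 * eps)
    for e in np.eye(d)
])
print(np.allclose(grad_adjoint, grad_fd, atol=1e-5))
```

Note that the nonlinear state solve is the expensive part; the adjoint step itself is a single linear solve with the transposed Jacobian at the converged state.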
This idea of the adjoint sensitivity method is not limited to finite-dimensional systems; it also extends to ODEs and PDEs. (For further explanation, see Prof. Martin Frank's lecture notes.)