Commit 02772f06 authored by Benoit Urruty

inversion1

parent 490b44d7
We need to interpolate the data onto the mesh; a sketch of this step follows the two lists below. These data contain:
- The bed topography and the ice thickness (both from BedMachine)
- The surface mass balance (SMB from the MAR model)
- The mean viscosity, which corresponds to a first guess of $\mu$ (from Paterson)
- The surface velocity (from MEaSUREs)
The model is now able to compute other variables:
- The surface topography
- The grounded mask
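
As an illustration of the interpolation step mentioned above (not the actual Elmer/Ice workflow), a gridded field such as the BedMachine thickness could be interpolated onto mesh nodes as follows; the grid, the node coordinates, and all names are hypothetical placeholders:

```python
# Sketch: interpolate a gridded dataset (e.g. ice thickness) onto mesh nodes.
# The grid and the node coordinates below are synthetic, not the real data.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical regular grid (x, y) carrying a field such as ice thickness
x = np.linspace(0.0, 100e3, 201)
y = np.linspace(0.0, 50e3, 101)
thickness_grid = np.random.rand(len(x), len(y))  # stand-in for BedMachine data

interp = RegularGridInterpolator((x, y), thickness_grid,
                                 bounds_error=False, fill_value=0.0)

# Hypothetical mesh node coordinates, shape (n_nodes, 2)
nodes = np.column_stack([np.random.uniform(0, 100e3, 500),
                         np.random.uniform(0, 50e3, 500)])
thickness_on_mesh = interp(nodes)  # one interpolated value per mesh node
```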
The values of the velocities $u$ and $v$ are exported to compute the cost function.
The cost function compares the computed surface velocity with the observed one. We compute the sum of the squared differences over all observation points:
$$
J = \sum_{i=1}^{N_{obs}}{\frac{1}{2}\left(U_{SSA} - U_{Obs}\right)^2}
$$
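
As a minimal numerical sketch, reading $U$ as the pair $(u, v)$ and assuming the velocities are stored as arrays of components at the $N_{obs}$ observation points (all names here are illustrative):

```python
import numpy as np

def cost_function(u_ssa, v_ssa, u_obs, v_obs):
    # J = sum over the N_obs points of 0.5 * ((u - u_obs)^2 + (v - v_obs)^2)
    return 0.5 * np.sum((u_ssa - u_obs) ** 2 + (v_ssa - v_obs) ** 2)
```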
The derivative of the cost function:
$$
Velocityb = \sum_{i=1}^{N_{obs}}{\left(U_{SSA} - U_{Obs}\right)}
$$
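
Continuing the sketch above (illustrative names, not the Elmer/Ice code), this derivative is simply the residual at each observation point:

```python
import numpy as np

def velocityb(u_ssa, v_ssa, u_obs, v_obs):
    # Sensitivity of J with respect to each velocity component at each point
    return np.stack([u_ssa - u_obs, v_ssa - v_obs])
```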
### The adjoint method (adjoint linear solver)
The adjoint method is a numerical technique to compute the gradient in an optimization problem. The adjoint of a linear system $\boldsymbol{A}x = F$ is defined as:
$$
\boldsymbol{A}^T x_b = b
$$
The solver takes as input the variable $Velocityb$, which contains the sensitivity of the cost function (the right-hand side $b$), and returns the adjoint solution $x_b$.
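
A minimal sketch of the idea with a dense linear system (everything below is illustrative, not the Elmer/Ice solver): if $J$ depends on the solution $x$ of $\boldsymbol{A}x = F$, the gradient of $J$ with respect to $F$ is obtained by one solve with $\boldsymbol{A}^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
F = rng.standard_normal(n)
x_obs = rng.standard_normal(n)

x = np.linalg.solve(A, F)         # forward solve: A x = F
b = x - x_obs                     # dJ/dx for J = 0.5 * ||x - x_obs||^2
xb = np.linalg.solve(A.T, b)      # adjoint solve: A^T xb = b -> xb = dJ/dF

# Finite-difference check of the first component of the gradient
eps = 1e-6
F_pert = F.copy()
F_pert[0] += eps
x_pert = np.linalg.solve(A, F_pert)
fd = (0.5 * np.sum((x_pert - x_obs) ** 2)
      - 0.5 * np.sum((x - x_obs) ** 2)) / eps
print(fd, xb[0])                  # the two values should agree closely
```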
### The gradient
We need to compute the gradient for each parameter we optimize. Theoretically, we have the right solution when the gradient is equal to zero, but we never obtain this value exactly, so we define a lower limit: when the gradient is low enough, we consider the result the best we can obtain.
$$
\nabla J= \left\{ \begin{array}{ll}
\frac{\partial J}{\partial \beta}=\frac{J(i)-J(i-1)}{\beta(i)-\beta(i-1)}\\
\frac{\partial J}{\partial \eta}=\frac{J(i)-J(i-1)}{\eta(i)-\eta(i-1)}
\end{array}
\right.
$$
The outputs from this solver are the nodal derivatives of the cost function with respect to the friction parameter and the mean viscosity.
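
As a toy illustration of the finite-difference formula above and of the stopping test on the gradient (the tolerance value is an arbitrary assumption):

```python
import numpy as np

def fd_gradient(J_i, J_im1, p_i, p_im1):
    # dJ/dp approximated as (J(i) - J(i-1)) / (p(i) - p(i-1)),
    # for p = beta or p = eta, as in the formula above
    return (J_i - J_im1) / (p_i - p_im1)

def converged(grad, tol=1e-6):
    # Stop once the gradient norm is "low enough" rather than exactly zero
    return np.linalg.norm(grad) < tol
```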
### The regularization terms
$$
J_{reg} = \int_{\Omega} \frac{1}{2} \left|\frac{dV}{dx}\right|^2 d\Omega
$$
with $V$ the nodal variable.
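
On a 1D mesh with nodal coordinates `x` and nodal values `V` (a simplified stand-in for the actual finite-element assembly), this term could be approximated element by element:

```python
import numpy as np

def j_reg(x, V):
    # dV/dx is constant on each element; the integral becomes a sum over
    # elements of 0.5 * |dV/dx|^2 * element length
    dVdx = np.diff(V) / np.diff(x)
    return 0.5 * np.sum(dVdx ** 2 * np.diff(x))
```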
#### Regularization of $\beta$
#### Regularization of $\eta$
### The optimization M1QN3
For each iteration, we compute the cost function $J$ and its gradient $\nabla J$, which are the inputs for M1QN3. The solver then provides a new estimate of the state. Each iteration is composed of several simulations in which M1QN3 searches for the best descent direction before passing to the next iteration.
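
M1QN3 is a limited-memory quasi-Newton code; as a stand-in, the same loop can be sketched with SciPy's L-BFGS-B, which plays an analogous role (cost and gradient in, updated state out). The quadratic cost below is a toy placeholder for $J(\beta, \eta)$:

```python
import numpy as np
from scipy.optimize import minimize

def cost_and_grad(p):
    # Toy quadratic standing in for J and its gradient
    J = 0.5 * np.sum((p - 3.0) ** 2)
    grad = p - 3.0
    return J, grad

p0 = np.zeros(4)  # initial guess for the control parameters
result = minimize(cost_and_grad, p0, jac=True, method="L-BFGS-B",
                  options={"gtol": 1e-8})
# result.x is the new estimate of the state after the quasi-Newton iterations
```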