
Conversation


@JHopeCollins JHopeCollins commented Nov 10, 2025

Depends on #4773 and firedrakeproject/fiat#201.

White and correlated noise generation from #3799, plus extra methods to calculate the action and inverse of autoregressive covariance operators and to use them for a weighted norm.

White noise generation

See Croci et al 2018.

White noise samples are calculated as $M^{1/2}z$, where $z_{i}\sim\mathcal{N}(0, I)$ and $M$ is the mass matrix.
$M^{1/2}$ can be constructed by assembling the Cholesky factor of the element-wise mass matrices.

There are two implementations of this:

  1. Hijack the pyop2 and loopy assembly kernels to return the Cholesky factor of the element mass matrix rather than the mass matrix itself. This is the implementation from JDBetteridge/randomfields, rebased in #3799.
  2. Build a discontinuous superspace and use PETSc to calculate the global Cholesky factor of that mass matrix, which is block diagonal over the elements. This uses the L2Cholesky class from Mesh independent optimization trick #4575. There is also a specialisation of this method for VertexOnlyMesh.
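Both implementations rest on the same linear algebra: because the (discontinuous) mass matrix is block diagonal over the elements, its Cholesky factor is just the block-diagonal collection of the element-wise factors, so no global or parallel factorisation is required. A minimal dense sketch with numpy (the blocks here are random SPD stand-ins, not actual Firedrake element mass matrices):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_spd(n):
    # random symmetric positive definite "element mass matrix" stand-in
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

# Toy block-diagonal mass matrix with two "element" blocks.
blocks = [random_spd(3), random_spd(3)]
Z = np.zeros((3, 3))
M = np.block([[blocks[0], Z],
              [Z, blocks[1]]])

# Element-wise Cholesky: each block is factored independently.
H = np.block([[np.linalg.cholesky(blocks[0]), Z],
              [Z, np.linalg.cholesky(blocks[1])]])

# H H^T = M, so w = H z with z ~ N(0, I) has covariance M.
assert np.allclose(H @ H.T, M)
w = H @ rng.standard_normal(6)
```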

Autoregressive covariance operator

See Mirouze & Weaver, 2010.

A covariance operator $B$ with an m-th order autoregressive function kernel can be calculated using m Backward Euler steps of a diffusion operator, where the diffusion coefficient is specified by the correlation lengthscale.

If $M$ is the mass matrix, $K$ is the matrix for a single Backward Euler step, and $\lambda$ is a normalisation factor, then the m-th order covariance operator with variance $\sigma^2$ is:

$$B: V^{*} \to V = \sigma\lambda((K^{-1}M)^{m}M^{-1})\lambda\sigma$$

This formulation leads to an efficient implementation for $B^{-1}$ (using the action of $K$) and for $B^{1/2}$, by taking only $m/2$ steps of the diffusion operator. These can be used to calculate weighted norms $||x||_{B^{-1}}$ and to sample from $\mathcal{N}(0,B)$.
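The half-step square root can be checked with dense matrices: with $M = LL^{T}$ and $B_{1/2} = \sigma\lambda(K^{-1}M)^{m/2}L^{-T}$, the middle $M^{-1}$ telescopes so that $B_{1/2}B_{1/2}^{T} = \sigma^{2}\lambda^{2}(K^{-1}M)^{m}M^{-1} = B$. A numpy sketch with random symmetric stand-ins for $M$ and $K$ (not the actual Firedrake operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 4             # system size and (even) number of diffusion steps
sigma, lam = 1.3, 0.7   # illustrative variance and normalisation factors

def random_spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

M = random_spd(n)              # mass matrix stand-in
K = M + 0.1 * random_spd(n)    # Backward Euler matrix (M + dt*A) stand-in

Kinv = np.linalg.inv(K)
Minv = np.linalg.inv(M)

# Full covariance operator: B = sigma*lam * (K^{-1} M)^m M^{-1} * lam*sigma
B = (sigma * lam) ** 2 * np.linalg.matrix_power(Kinv @ M, m) @ Minv

# Square root from m/2 steps: B_half = sigma*lam * (K^{-1} M)^{m/2} L^{-T},
# with M = L L^T, so that B_half @ B_half.T reproduces B.
L = np.linalg.cholesky(M)
B_half = sigma * lam * (np.linalg.matrix_power(Kinv @ M, m // 2)
                        @ np.linalg.inv(L).T)

assert np.allclose(B_half @ B_half.T, B)
```

A sample $w = B_{1/2}z$ with $z\sim\mathcal{N}(0,I)$ then has covariance $B$, at half the cost of applying the full operator.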

The CovarianceOperatorBase class provides an abstract interface for other covariance operators, which must implement:

  1. sample to return $w\in\mathcal{N}(0,B)$
  2. apply_action(x) to apply $Bx$
  3. apply_inverse(x) to apply $B^{-1}x$
  4. norm(x) to return $||x||_{B^{-1}} = x^{T}B^{-1}x$
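The interface above can be sketched as a Python ABC; the method names are from the PR, but the dense-matrix subclass below is purely illustrative (the real implementations act matrix-free through the diffusion solves):

```python
import numpy as np
from abc import ABC, abstractmethod

class CovarianceOperatorBase(ABC):
    """Sketch of the abstract covariance operator interface."""

    @abstractmethod
    def sample(self):
        """Return w ~ N(0, B)."""

    @abstractmethod
    def apply_action(self, x):
        """Return B @ x."""

    @abstractmethod
    def apply_inverse(self, x):
        """Return B^{-1} @ x."""

    def norm(self, x):
        # weighted norm ||x||_{B^{-1}} = x^T B^{-1} x
        return x @ self.apply_inverse(x)

class DenseCovariance(CovarianceOperatorBase):
    """Illustrative dense implementation, not the PR's matrix-free one."""

    def __init__(self, B, rng=None):
        self.B = np.asarray(B)
        self.L = np.linalg.cholesky(self.B)   # B = L L^T
        self.rng = rng or np.random.default_rng()

    def sample(self):
        # w = L z has covariance L L^T = B
        return self.L @ self.rng.standard_normal(self.B.shape[0])

    def apply_action(self, x):
        return self.B @ x

    def apply_inverse(self, x):
        return np.linalg.solve(self.B, x)

cov = DenseCovariance(2.0 * np.eye(3))
x = np.ones(3)
cov.norm(x)   # x^T B^{-1} x = 3/2 for B = 2I
```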

PETSc python contexts

If you use the weighted norm $||x||_{B^{-1}}$ in the functional for an adjoint calculation, then $B$ is often a good preconditioner for the second-order Hessian. This PR provides a python Mat context and corresponding python PC context for children of CovarianceOperatorBase.
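A petsc4py python PC context only needs to implement `setUp(pc)` and `apply(pc, x, y)`. The sketch below shows that shape for a covariance preconditioner; the class and attribute names are hypothetical (not the ones in the PR), and small duck-typed stand-ins are used so it runs without PETSc installed:

```python
import numpy as np

class CovariancePC:
    """Sketch of a petsc4py 'python' PC context applying a covariance
    operator B as the preconditioner action (hypothetical names)."""

    def __init__(self, covariance):
        # any object providing apply_action(x) -> B @ x
        self.covariance = covariance

    def setUp(self, pc):
        pass  # nothing to assemble in this illustration

    def apply(self, pc, x, y):
        # petsc4py passes PETSc Vecs; .array exposes the local data
        # as a numpy array
        y.array[:] = self.covariance.apply_action(x.array)

# Minimal stand-ins so the sketch runs without PETSc.
class DiagonalCovariance:
    def __init__(self, diag):
        self.diag = np.asarray(diag, dtype=float)

    def apply_action(self, x):
        return self.diag * x

class FakeVec:
    def __init__(self, array):
        self.array = np.asarray(array, dtype=float)

pc_ctx = CovariancePC(DiagonalCovariance([1.0, 2.0, 3.0]))
x, y = FakeVec([1.0, 1.0, 1.0]), FakeVec(np.zeros(3))
pc_ctx.apply(None, x, y)   # y <- B x
```

With real PETSc, the context would be attached via `PC.setType('python')` and `PC.setPythonContext(...)`, with a matching python Mat context supplying `mult` for the operator action.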

Co-authored-by: Jack Betteridge <j.betteridge@imperial.ac.uk>
@jrmaddison

L2Cholesky in #4575 might be useful here, e.g. for parallel

@JHopeCollins

> L2Cholesky in #4575 might be useful here, e.g. for parallel

Of course, you can just grab the local part of the matrix because it's block diagonal, so you don't actually need a parallel Cholesky factor.

I've added that PR to the Firedrake meeting agenda this afternoon.

@JHopeCollins JHopeCollins marked this pull request as ready for review December 9, 2025 11:55


```python
def diffusion_form(u, v, kappa: Constant | Function,
                   formulation: AutoregressiveCovariance.DiffusionForm):
```

unit test this function.

