
Thursday, 12 November 2020

The Bayesian Finite Element Method: blending machine learning and classical algorithms to compute "thick" solutions to partial differential equations

 

 
Partial Differential Equation-based numerical simulators are ubiquitous in engineering. However, PDE solvers scale rather poorly with increasing spatial, temporal and parametric resolution, leading to computational costs that increase exponentially with the complexity of the physical system of interest. As a consequence, discretisation schemes are often coarser than desired, in a pragmatic push towards applications such as physics-based modelling in interaction with reality, also known as digital twins.

A way forward is to treat all sources of uncertainty consistently and to subsequently approach model refinement as a unified, uncertainty-driven task. To take modelling error into account, classical Bayesian model calibration and state estimation methodologies treat model parameters and model outputs as random variables, which are then conditioned on data in order to yield posterior distributions with appropriate credible intervals. However, discretisation errors are traditionally quantified through deterministic numerical analysis, yielding point estimates or bounds without a distribution, which makes these approaches incompatible with a Bayesian treatment of model uncertainty.

Recently, significant developments have been made in the area of probabilistic solvers for PDEs. The idea is to formulate discretisation schemes as Bayesian estimation problems, yielding not a single parametrised spatio-temporal field but a distribution over such fields. Most methods use Gaussian processes as their fundamental building block: a Gaussian random field is conditioned to satisfy the PDE at particular points of the computational domain. This gives rise to probabilistic variants of the meshless methods traditionally used to solve PDEs. To date, however, such approaches are not available for finite element solvers, which are typically based on integral formulations over arbitrary simplices, leading to analytically intractable integrals.
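To make this collocation-style conditioning concrete, here is a minimal sketch for the 1-D Poisson problem -u'' = f on (0, 1) with homogeneous Dirichlet conditions, using a squared-exponential Gaussian-process prior. This illustrates the probabilistic meshless idea described above, not the finite element method proposed below; the manufactured solution, length-scale and point sets are illustrative assumptions.

```python
import numpy as np

# Probabilistic collocation sketch for -u'' = f on (0, 1), u(0) = u(1) = 0.
# Manufactured solution (an assumption for illustration): u(x) = sin(pi x).
ell = 0.3                                       # kernel length-scale (hyperparameter)
f = lambda x: np.pi**2 * np.sin(np.pi * x)      # source term
u_exact = lambda x: np.sin(np.pi * x)

# Squared-exponential kernel g(r) = exp(-r^2 / (2 ell^2)) and the derivatives
# needed to apply the operator L = -d^2/dx^2 to the Gaussian-process prior.
def g(r):  return np.exp(-r**2 / (2 * ell**2))
def g2(r): return (r**2 / ell**4 - 1 / ell**2) * g(r)                       # d^2 g / dr^2
def g4(r): return (3 / ell**4 - 6 * r**2 / ell**6 + r**4 / ell**8) * g(r)   # d^4 g / dr^4

x_bc = np.array([0.0, 1.0])                     # boundary "observations" u = 0
x_col = np.linspace(0.05, 0.95, 15)             # collocation points where the PDE is enforced
y = np.concatenate([np.zeros(2), f(x_col)])     # data vector: [u(0), u(1), (Lu)(x_col)]

def diff(a, b):                                 # pairwise differences a_i - b_j
    return a[:, None] - b[None, :]

# Covariance of [u(x_bc), (Lu)(x_col)] under the prior:
# cov(u, u) = g, cov(u, Lu) = -d^2 k/dy^2 = -g2, cov(Lu, Lu) = d^4 k/dx^2 dy^2 = g4.
K = np.block([[g(diff(x_bc, x_bc)),   -g2(diff(x_bc, x_col))],
              [-g2(diff(x_col, x_bc)), g4(diff(x_col, x_col))]])
K += 1e-10 * np.eye(K.shape[0])                 # jitter for numerical stability

# Posterior at prediction points by standard Gaussian conditioning.
x_star = np.linspace(0.0, 1.0, 11)
K_star = np.hstack([g(diff(x_star, x_bc)), -g2(diff(x_star, x_col))])
mean = K_star @ np.linalg.solve(K, y)
var = g(np.zeros(len(x_star))) - np.einsum('ij,ij->i', K_star, np.linalg.solve(K, K_star.T).T)

for xs, m, s, ue in zip(x_star, mean, np.sqrt(np.maximum(var, 0.0)), u_exact(x_star)):
    print(f"x={xs:.1f}  posterior mean={m:+.4f} +/- {s:.1e}   exact={ue:+.4f}")
```

The posterior mean plays the role of the deterministic solution, while the posterior standard deviation quantifies the remaining discretisation uncertainty away from the collocation points.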

We propose what we believe is the first probabilistic finite element methodology and apply it to steady heat diffusion. It is based on the definition of a discrete Gaussian prior over a p-refined finite element space. This prior is conditioned to satisfy the PDE weakly, using the non-refined finite element space to generate a linear observation operator. The hyperparameters of the Gaussian process are optimised by maximum likelihood. We also provide an efficient solver based on Monte Carlo sampling of the analytical posterior, coupled with an approximate multigrid sampler for the p-refined Gaussian prior. We show that this sampler keeps the overall cost of the methodology of the same order as that of the p-refined deterministic FE technology, whilst delivering valuable probability distributions for the continuous solution to the PDE system.
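Stripping away the finite element specifics, the conditioning step is standard Gaussian algebra (a generic statement, not a transcription of the paper's derivation): if the coefficient vector \(u\) of the p-refined field carries a Gaussian prior and testing the weak form of the PDE against the non-refined basis yields a linear constraint \(A u = b\), the posterior is again Gaussian,

\[
u \sim \mathcal{N}(m, \Sigma)
\;\;\Longrightarrow\;\;
u \,\big|\, (A u = b) \sim \mathcal{N}\!\Big(
m + \Sigma A^{\top} (A \Sigma A^{\top})^{-1} (b - A m),\;
\Sigma - \Sigma A^{\top} (A \Sigma A^{\top})^{-1} A \Sigma
\Big).
\]

One way to realise the Monte Carlo sampling mentioned above is to draw prior samples (e.g. with an approximate multigrid sampler) and apply the same linear correction to each of them, \(u_{\text{post}} = u_{\text{prior}} + \Sigma A^{\top}(A\Sigma A^{\top})^{-1}(b - A u_{\text{prior}})\).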


 

Saturday, 7 January 2017

Certified Defeaturing of CAD models

 
 
Defeaturing is routinely used by engineers to simplify their numerical analyses. Typically, small geometrical features are removed a priori from the CAD model, leading to more affordable simulation stages. We develop a method to estimate the error made when ignoring such features. The error on the chosen quantity of interest is bounded from above and below, using dedicated methodological derivations rooted in convex analysis. Deriving the bounds requires only the defeatured solution and some affordable post-processing of this solution in the vicinity of the feature.

This is the outcome of the PhD work of Dr Rahimi, supervised by Dr Kerfriden, Dr Langbein and Prof. Martin at Cardiff University.
 


Tuesday, 14 April 2015

Multiscale approximation of stochastic PDEs with rapidly varying coefficients: certification of accuracy through error bounding

The goal of this research is to certify the accuracy of homogenisation schemes with respect to engineering quantities of interest.

Homogenisation has been used for centuries to upscale knowledge of the physics of heterogeneous materials to the engineering scale, where predictions are required. Homogenisation is typically a limit result that delivers good predictions when the typical length-scale of the material heterogeneities is small compared to the engineering scale (i.e. the scales are well separated). Broadly speaking, homogenisation fails in boundary regions that are dominated by stress concentrations (around sharp joints, holes, at the interface between different materials, ...).
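In the classical linear diffusion setting, for instance, the limit result reads: if the coefficients oscillate at a small scale \(\varepsilon\), the heterogeneous solution converges to the solution of a constant-coefficient (homogenised) problem as the scales separate,

\[
-\nabla \cdot \big( A(x/\varepsilon)\, \nabla u_\varepsilon \big) = f
\qquad \xrightarrow{\;\varepsilon \to 0\;} \qquad
-\nabla \cdot \big( A^{\mathrm{hom}}\, \nabla u_0 \big) = f ,
\qquad u_\varepsilon \to u_0 .
\]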

We investigate here a methodology to quantify the error made when homogenisation is used although scale separability is not satisfied. We started from the modelling error methodology developed at ICES Texas in the early 2000s. The approach proposed by this group is to bound the error made on engineering quantities of interest (QoI) when using an arbitrary homogenisation scheme as an approximation of the intractable, fine-scale heterogeneous problem. This was done by extending the equilibrated residual method (classically used to quantify discretisation errors) to the context of modelling error, and by combining it with the adjoint methodology to convert error measures in the energy norm into errors in QoI. The method was shown to deliver guaranteed error bounds, without requiring the underlying heterogeneous problem to be solved. However, the heterogeneous problem still needs to be constructed (but not solved) in order to compute the bounds, which, in the case of large composite structures, remains a computational bottleneck.
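For a linear QoI, the adjoint argument alluded to above can be summarised by the textbook dual-weighted residual identity (stated here generically, not as the exact derivation of the referenced work):

\[
Q(u) - Q(\tilde u) \;=\; R(\tilde u;\, z),
\qquad R(\tilde u;\, v) := \ell(v) - a(\tilde u, v),
\qquad a(v, z) = Q(v) \quad \forall v ,
\]

where \(a\) and \(\ell\) are the bilinear and load forms of the fine-scale heterogeneous problem, \(\tilde u\) is the homogenised approximation and \(z\) is the adjoint solution. Bounding the residual term with equilibrated residual techniques then yields guaranteed bounds on the QoI error.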




We tackle this issue by representing the microscale heterogeneities as a random field. Besides being a realistic modelling approach (we rarely know precisely where the heterogeneities of composite materials are located), this representation completely alleviates the need to mesh and assemble the fine-scale heterogeneous problem. We therefore recover a numerical separation of scales for the computation of modelling error bounds.


We showed that this methodology can be applied to provide bounds on the stochastic homogenisation error made on both the first and second statistical moments of engineering QoI. These bounds can be implemented within a couple of hours in any finite element code. They can be interpreted as an extension of the classical Reuss-Voigt bounds, but without any a priori requirement in terms of scale separability.
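For reference, the classical Reuss and Voigt bounds bracket the effective stiffness of a two-phase mixture (volume fractions \(f_1, f_2\), moduli \(E_1, E_2\)) between the harmonic and arithmetic means:

\[
\left( \frac{f_1}{E_1} + \frac{f_2}{E_2} \right)^{-1} \;\le\; E^{\mathrm{eff}} \;\le\; f_1 E_1 + f_2 E_2 .
\]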

All the numerical results were obtained by Daniel Alves Paladim in the context of his PhD thesis. We have benefitted greatly from the advice of Mathilde Chevreuil, University of Nantes, regarding the stochastic aspects of this work.

Monday, 15 April 2013

Reduced order modelling for meso-scale material optimisation


The goal of this project is to provide affordable numerical methods for the optimisation of composite structures with randomly distributed heterogeneities. We assume that the elastic constants of the inclusions can be chosen within a range of admissible values. Depending on the load applied to the structure, an optimal set of these parameters can be found. We investigate the reduction of the numerical costs associated with the calculation of the gradient of the objective function with respect to the material characteristics. A classical homogenisation technique is used, associated with certified reduced order modelling.




The reduction of the RVE problem relies on Galerkin-POD, as sketched below. An upper and a lower bound for the reduced order modelling error are developed. The upper bounding technique makes use of the concept of the Constitutive Relation Error (see the reference below).
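As a rough illustration of the Galerkin-POD ingredient (the toy parametrised system and all names below are assumptions, not the paper's actual RVE problem), one builds a reduced basis from snapshot solutions via an SVD and projects the parametrised operator onto it:

```python
import numpy as np

# Minimal Galerkin-POD sketch on a toy parametrised linear system
# K(mu) u = f, with K(mu) = K0 + mu * K1 (an illustrative affine dependence).
rng = np.random.default_rng(0)
n = 200
K0 = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
K1 = np.diag(rng.uniform(0.5, 1.5, n))       # stand-in for the inclusion stiffness contribution
f = np.ones(n)

def solve_full(mu):
    return np.linalg.solve(K0 + mu * K1, f)

# Offline stage: snapshots over a training set of parameters, then SVD -> POD basis.
training_mus = np.linspace(0.1, 2.0, 10)
snapshots = np.column_stack([solve_full(mu) for mu in training_mus])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 4                                         # number of retained POD modes
Phi = U[:, :r]                                # reduced basis (n x r)

# Online stage: Galerkin projection of the full operator onto the POD basis.
def solve_reduced(mu):
    Kr = Phi.T @ (K0 + mu * K1) @ Phi         # r x r reduced stiffness
    fr = Phi.T @ f
    return Phi @ np.linalg.solve(Kr, fr)      # reduced solution lifted back to full space

mu_test = 1.37
u_full = solve_full(mu_test)
u_rom = solve_reduced(mu_test)
print(f"relative ROM error at mu={mu_test}: "
      f"{np.linalg.norm(u_full - u_rom) / np.linalg.norm(u_full):.2e}")
```

The certified bounds of the paper then control the gap between the full and reduced solutions without requiring the full solve in the online stage.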




Kerfriden, P., Ródenas, J. J. and Bordas, S. P.-A. (2014), Certification of projection-based reduced order modelling in computational homogenisation by the constitutive relation error. Int. J. Numer. Meth. Engng, 97: 395–422. doi:10.1002/nme.4588