Showing posts with label Multiscale Modelling. Show all posts

Monday, 10 January 2022

Multiscale stress surrogates via probabilistic graph-based geometric deep learning

https://arxiv.org/abs/2205.06562


 

Fast stress predictions in multiscale materials & structures: graph-based probabilistic geometric deep learning with online physics-based corrections

https://618.euromech.org/slides/

Wednesday, 8 November 2017

Computational homogenisation in FEniCS







Simulations by Dr P. Kerfriden
Finite element software: https://fenicsproject.org/
Meshing software: http://gmsh.info/
Visualisation tool: https://www.paraview.org/

Tuesday, 14 April 2015

Multiscale approximation of stochastic PDEs with rapidly varying coefficients: certification of accuracy through error bounding

The goal of this research is to certify the accuracy of homogenisation schemes in the sense of engineering quantities of interest.

Homogenisation has been used for well over a century to upscale knowledge of the physics of heterogeneous materials to the engineering scale, where predictions are required. Homogenisation is typically a limit result: it delivers good predictions when the characteristic length-scale of the material heterogeneities is small compared to the engineering scale (i.e. when the scales are well separated). Broadly speaking, homogenisation fails in boundary regions dominated by stress concentrations (around sharp joints, holes, at interfaces between different materials, ...).

We investigate here a methodology to quantify the error made when homogenisation is used although scale separability is not satisfied. We started from the modelling error methodology developed at ICES Texas in the early 2000s. The approach proposed by this group is to bound the error made on engineering quantities of interest (QoI) when using an arbitrary homogenisation scheme as an approximation of the intractable, fine-scale heterogeneous problem. This was done by extending the equilibrated residual method (classically used to quantify discretisation errors) to the context of modelling error, and combining it with the adjoint methodology to convert error measures in the energy norm into errors in QoI. The method was shown to deliver guaranteed error bounds without requiring the solution of the underlying heterogeneous problem. However, the heterogeneous problem still needs to be constructed (though not solved) in order to compute the bounds, which, in the case of large composite structures, remains a computational bottleneck.
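The adjoint mechanism underlying the conversion of residuals into QoI errors can be illustrated on a linear toy problem. The NumPy sketch below uses an assumed 1D heterogeneous diffusion operator and its naively averaged surrogate (not the actual bounding method of the paper): weighting the surrogate's residual by the fine-scale adjoint recovers the QoI error exactly in the linear case.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# 1D "fine-scale" diffusion operator with strongly heterogeneous coefficients
k = np.exp(rng.normal(size=n + 1))
A_fine = np.zeros((n, n))
for i in range(n):
    A_fine[i, i] = k[i] + k[i + 1]
    if i > 0:
        A_fine[i, i - 1] = A_fine[i - 1, i] = -k[i]

# "Homogenised" surrogate: same stencil with a single averaged coefficient
k0 = k.mean()
A_hom = np.zeros((n, n))
for i in range(n):
    A_hom[i, i] = 2.0 * k0
    if i > 0:
        A_hom[i, i - 1] = A_hom[i - 1, i] = -k0

f = np.ones(n)
q = np.zeros(n); q[n // 2] = 1.0          # QoI: solution value at the midpoint

u_hom = np.linalg.solve(A_hom, f)         # cheap surrogate solution
z = np.linalg.solve(A_fine.T, q)          # adjoint problem of the *fine* model

# Adjoint-weighted residual of the surrogate in the fine model:
# for a linear problem this equals the QoI error exactly
est = z @ (f - A_fine @ u_hom)

u_fine = np.linalg.solve(A_fine, f)       # intractable in realistic settings
true_err = q @ (u_fine - u_hom)
```

In practice the fine-scale solves are of course unavailable, which is precisely why the bounding technique works with the residual and the adjoint rather than with the fine solution itself.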




We tackle this issue by representing the microscale heterogeneities as a random field. Besides being a realistic modelling choice (we rarely know precisely where the heterogeneities of composite materials are located), this representation completely removes the need to mesh and assemble the fine-scale heterogeneous problem. We therefore retrieve a numerical separation of scales for the computation of modelling error bounds.
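As a minimal sketch of the idea, a heterogeneous stiffness field can be sampled without ever meshing the microstructure. The covariance model below (squared-exponential with log-normal marginals) is an assumption of this illustration, not the specific field used in the work.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
ell = 0.05                                 # correlation length (assumed value)

# Squared-exponential covariance of a latent Gaussian field
C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
L = np.linalg.cholesky(C + 1e-8 * np.eye(x.size))   # jitter for stability

g = L @ rng.standard_normal(x.size)        # one realisation of the Gaussian field
E = np.exp(0.3 * g)                        # log-normal stiffness field, always > 0
```

Each call to the generator yields a new admissible microstructure, so statistics of the modelling error can be estimated by sampling rather than by assembling one enormous deterministic fine-scale model.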


We showed that this methodology could be applied to provide bounds for the stochastic homogenisation error made on both the first and second statistical moments of engineering QoI. These bounds can be implemented within a couple of hours in any finite element code. They can be interpreted as an extension of the classical Reuss-Voigt bounds, but without any a priori requirement in terms of scale separability.
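For reference, the classical Reuss-Voigt bounds that these results extend can be computed in two lines. The phase properties below are illustrative values, not data from the paper.

```python
# Two-phase composite: Young's moduli and volume fractions (example values)
E1, E2 = 70.0, 3.0        # e.g. glass fibre / epoxy matrix, in GPa
v1 = 0.4                  # volume fraction of phase 1
v2 = 1.0 - v1

E_voigt = v1 * E1 + v2 * E2           # uniform-strain assumption: upper bound
E_reuss = 1.0 / (v1 / E1 + v2 / E2)   # uniform-stress assumption: lower bound
```

The Voigt (arithmetic) and Reuss (harmonic) averages bracket the effective modulus of any microstructure with these phase fractions; the bounds discussed above play the same role for errors on QoI, but remain valid when the scales are not separated.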

All the numerical results were obtained by Daniel Alves Paladim in the context of his PhD thesis. We have benefited greatly from the advice of Mathilde Chevreuil, University of Nantes, regarding the stochastic aspects of this work.

Monday, 12 January 2015

Bayesian optimisation for the selection of representative load paths in computational material testing (best UK Ph.D. in Comp. Mech. for Dr Olivier Goury)

The aim of computational material testing is to obtain the relationship between forces and extensions of a complex material through simulations, given some information about its microstructure. This is conceptually similar to pulling on a specimen experimentally, and reporting the force that needs to be applied to obtain an overall deformation of the material. Of course, replacing this costly experimental setup by simulations allows practitioners to investigate the use of new materials before manufacturing them, to play with the microstructure (e.g. varying the fibre content in advanced fibre-reinforced concrete) in order to design a material that fits particular engineering needs, or to test and control the reaction of the material when used for the design of a complex, multiscale engineering system.

However, computational material testing (or computational homogenisation), is very demanding in terms of computational resources. Typically, the material model needs to be solved at every material point of the engineering system of interest, which can be arbitrarily large. 

The solution that we have been investigating to alleviate this issue is to develop efficient model order reduction techniques. In an offline stage, the material is tested using pre-defined loading scenarios, which leads to a set of particular mechanical states (the snapshots). Then, when a response of the material is required by the engineer for design purposes (i.e. online), it is obtained by performing an optimal interpolation in the space spanned by these pre-computed states, which reduces the overall computational cost by orders of magnitude.
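The offline/online split can be sketched with a plain SVD-based reduced basis. The snapshot data below is synthetic, and this is generic proper orthogonal decomposition rather than the specific reduction used in the work.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 30

# Offline: snapshot matrix from pre-defined load scenarios (synthetic, rank 5)
modes = rng.standard_normal((n, 5))
S = modes @ rng.standard_normal((5, m))

U, s, _ = np.linalg.svd(S, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))          # numerical rank of the snapshots
V = U[:, :r]                               # reduced basis

# Online: approximate a new state by projecting onto the snapshot space
u_new = modes @ rng.standard_normal(5)     # lies in the span of the snapshots
u_rb = V @ (V.T @ u_new)
```

When the new state truly lies in the span of the snapshots, the projection is exact; the whole difficulty, addressed below, is choosing the offline load scenarios so that this is (approximately) the case for every state the engineer may request.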
Figure 1: Model order reduction for computational material testing 
The choice of the pre-defined load scenarios is key to the success of this approach. All the states that will need to be predicted should be sufficiently well represented by the states contained in the snapshot. At the present stage of our research, we consider that all potential load cases are equally likely. Although very general, this framework leads to an immense space of mechanical states to explore and represent.

Figure 2: random load-path generation in 2D.
In 3D, the paths belong to a hypercube of dimension 6.
A first approach to exploring this space of likely states is to choose random load paths (see Figure 2). In the case of damage mechanics, we constrain the random generation to follow states of strictly increasing energy dissipation, thereby forcing the generator to explore non-trivial (i.e. damaged) mechanical states. The reduced order models obtained by applying this simple idea are surprisingly efficient and reliable in our numerical test cases (a simple elastic, damageable multiphase representative volume element).
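A minimal sketch of such a constrained generator is given below. Using the norm of the current load point as a stand-in for the dissipation measure is an assumption of this illustration; in the actual work the constraint involves the energy dissipated by the damage model.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_steps = 6, 40          # 6 strain components: a hypercube of dimension 6

path = np.zeros((n_steps + 1, dim))
d = 0.0                       # history variable standing in for dissipation
for k in range(1, n_steps + 1):
    while True:
        step = rng.standard_normal(dim)
        trial = path[k - 1] + 0.1 * step / np.linalg.norm(step)
        if np.linalg.norm(trial) > d:   # accept only strictly "dissipative" steps
            break
    d = np.linalg.norm(trial)
    path[k] = trial
```

Rejected steps cost nothing here; with a real material model, each trial step requires a constitutive update, which is why the generator is kept as simple as possible.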

Our second approach is more advanced and aims at providing a quasi-optimal family of load-cases. The idea is to locate the load case that is the most incorrectly predicted when using previously generated mechanical states, and iterate until the accuracy of the reduced order model is acceptable. This data-driven approach is very appealing but requires a measure of goodness of prediction over all potential load-cases, which is difficult to (i) define and (ii) obtain at reasonable numerical costs. We have tackled the affordability issue by adopting:
  1. A hierarchical description of the load paths using adaptive shape functions in time: start with proportional loading, then introduce an increasing number of kinks at arbitrary locations on the load path (see Figure 3).
  2. A Bayesian optimisation algorithm to detect the case of worst prediction for a given approximate description of potential load-cases. Precisely, we compute the residual of the governing equations at points chosen quasi-optimally using Gaussian process interpolation and the associated notion of maximum probability of improvement. Subsequently, we establish a relationship between a measure of this residual and the goodness of prediction, through a probabilistic regression technique.

Figure 3: Automatic path generation through probabilistic worst-case scenario detection (left: proportional loading; right: space of load paths containing a unique kink).
This new approach to load-path selection provides some confidence in the accuracy of the constructed reduced model (we do not leave unexplored regions in the space of potential mechanical states), but it is significantly more expensive than the random generation approach: we must pay for reliability, particularly in the context of complex, nonlinear problems. This overhead may decrease when considering more complex material behaviours (visco-plasticity, anisotropy), as the random generator may require more time to detect small outliers in the space of potential mechanical states.
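The acquisition loop of the Bayesian optimisation stage can be sketched as follows. The 1D function standing in for the residual, the kernel hyper-parameters, and the grid are all assumptions of this illustration; the real search space is the high-dimensional space of load paths.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical stand-in for the quantity to maximise: the residual of the
# reduced model over a 1D parametrisation of load paths
def residual(x):
    return np.sin(3 * x) * x

def gp_posterior(X, y, Xs, ell=0.3, sig=1.0, noise=1e-6):
    """Gaussian process posterior mean and standard deviation on Xs."""
    k = lambda a, b: sig * np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = sig - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

X = np.array([0.0, 1.0, 2.0])              # initial load-path designs
y = residual(X)
grid = np.linspace(0.0, 2.0, 201)

for _ in range(15):
    mu, sd = gp_posterior(X, y, grid)
    target = y.max() + 0.01                # small margin avoids re-sampling
    # probability of improvement over the current worst-case residual
    pi = np.array([0.5 * (1 + erf((m - target) / (s * sqrt(2))))
                   for m, s in zip(mu, sd)])
    x_next = grid[np.argmax(pi)]           # next load case to test
    X = np.append(X, x_next)
    y = np.append(y, residual(x_next))
```

Each acquisition balances the predicted residual (exploitation) against the posterior uncertainty (exploration), so the loop homes in on the worst-predicted load case without exhaustively testing the whole space.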

More details about this research can be found in the Ph.D. thesis of Dr Olivier Goury.

Goury, O., Amsallem, D., Bordas, S.P.A. et al. Comput Mech (2016) 58: 213. https://doi.org/10.1007/s00466-016-1290-2

Wednesday, 13 March 2013

Adaptive multiscale methods for multiple intergranular discontinuities


In order to simulate fracture in heterogeneous materials, one of the most promising approaches is to model the behaviour of the material at the scale of the material heterogeneities, which is usually called micro or meso-modelling. In a second step, these fine-scale features can be transferred to the scale of the structure by averaging techniques (on representative volume elements or unit cells) or homogenisation. However, in the case of fracture, these upscaling methods cannot be used in the vicinity of cracks, as the separation of scales necessary for their application is lost.
In the literature, two schools of thought aim at alleviating this problem. The first tries to extend the applicability of averaging techniques to fracture (in the case of established damage bands). The second analyses the zones where homogenisation fails directly at the microscopic scale, in a concurrent framework. Although the latter approach is more general, it is heavier in terms of computations and requires the development of robust adaptivity procedures, which is the topic of this project.

From the PhD thesis of Ahmad Akbari R.
We propose to capture the initiation of the damage mechanisms at the macroscale using a classical FE2 approach. In order to control the precision of the simulations, an error estimation for the upscaling strategy is carried out at each step of the time integration algorithm. Based on this estimation, the macro elements are refined hierarchically where needed. When the size of a macro element becomes of the order of the statistical volume element used in the FE2 method, the homogenisation step is bypassed. Instead, the corresponding process zone is modelled directly at the microscale and coupled to the homogenised region by a mortar-type gluing technique.
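The refine-then-bypass logic can be sketched on a 1D toy problem. The error indicator below is hypothetical (it simply flags elements overlapping a "process zone"), and h_min stands in for the size of the statistical volume element.

```python
import numpy as np

h_min = 1.0 / 16     # stand-in for the statistical volume element size
tol = 0.05

def indicator(a, b):
    # toy modelling-error indicator: large where the element overlaps the
    # process zone centred at x = 0.3, and proportional to element size
    d = max(a - 0.3, 0.3 - b, 0.0)
    return (b - a) * np.exp(-(d / 0.05)**2)

elements = [(0.0, 1.0)]       # macro mesh on a unit domain
microscale = []               # zones handed over to direct micro resolution
while True:
    new_elements, changed = [], False
    for (a, b) in elements:
        if indicator(a, b) <= tol:
            new_elements.append((a, b))        # homogenisation is adequate here
        elif 0.5 * (b - a) < h_min:
            microscale.append((a, b))          # bypass homogenisation
        else:
            m = 0.5 * (a + b)
            new_elements += [(a, m), (m, b)]   # hierarchical refinement
            changed = True
    elements = new_elements
    if not changed:
        break
```

The loop terminates once every macro element either satisfies the tolerance or has been handed over to the microscale, which mirrors the adaptive scale-selection strategy described above.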


Simulations by Ahmad Akbari, P. Kerfriden and S.P.-A. Bordas


A Akbari Rahimabadi, P Kerfriden, S Bordas, Scale selection in nonlinear fracture mechanics of heterogeneous materials, Philosophical Magazine 95 (28-30), 3328-3347, 2015

Monday, 11 January 2010

Error controlled time-stepping procedure for nonlinear fracture


 

A very simple procedure to adapt the time-stepping scheme "on-the-fly" in simulations of delamination [Allix et al. 2010]. A statically admissible solution is reconstructed by interpolation between two successive time solutions. A measure of the non-verification of the constitutive law by this reconstructed solution over each time interval provides a valid error estimate for the time discretisation scheme. It is used to adapt the load steps (the value of the arc-length parameter in this particular case).
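The adapt-on-the-fly logic can be sketched generically. The damage-rate equation below is a toy model, and a step-halving comparison stands in for the constitutive-law-based error measure of the paper.

```python
import numpy as np

# Toy damage-evolution law with a burst of activity around t = 0.5
def rate(t, d):
    return 10.0 * (1.0 - d) * np.exp(-((t - 0.5) / 0.05)**2)

tol = 1e-4
t, d, dt = 0.0, 0.0, 0.05
accepted = []                   # sizes of the accepted time steps
while t < 1.0:
    dt = min(dt, 1.0 - t)
    # error indicator: compare one full Euler step with two half steps
    full = d + dt * rate(t, d)
    half = d + 0.5 * dt * rate(t, d)
    half += 0.5 * dt * rate(t + 0.5 * dt, half)
    err = abs(full - half)
    if err > tol:
        dt *= 0.5               # reject the step and refine
        continue
    t += dt
    d = half
    accepted.append(dt)
    if err < 0.25 * tol:
        dt *= 2.0               # coarsen when the error is comfortably small
```

The step size automatically collapses during the burst of damage activity and grows again afterwards, which is exactly the behaviour sought when adapting the arc-length increments.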




Monday, 5 October 2009

Simulation of delamination in a bolted joint

An application of the multiscale domain decomposition method to joints in composite laminates. The scientific bottleneck in this work was the cost of solving the coarse-grid problem of the domain decomposition approach. We used the ideas developed by Dr Gosselet and Prof. Rey [Gosselet and Rey 2003] on the acceleration of Krylov solvers to speed up the solution process.
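The simplest form of such reuse, warm-restarting the Krylov solver from a previous solution, can be sketched as follows. The cited work goes considerably further (recycling full Krylov subspaces across solves); this is only an illustration on an assumed SPD test matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T   # SPD, condition number 100

def cg(A, b, x0, tol=1e-8):
    """Plain conjugate gradient; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    it = 0
    while np.linalg.norm(r) > tol * np.linalg.norm(b):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        it += 1
    return x, it

b1 = rng.standard_normal(n)
b2 = b1 + 0.01 * rng.standard_normal(n)   # a nearby load case

x1, _ = cg(A, b1, np.zeros(n))
_, it_cold = cg(A, b2, np.zeros(n))       # start from scratch
_, it_warm = cg(A, b2, x1)                # reuse the previous solution
```

Because successive load cases in a nonlinear or parametric study produce nearby right-hand sides, reusing information from earlier solves cuts the iteration count, and subspace recycling amplifies this effect further.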


Section of the deformed bolted joint when loaded in traction.


One of the practical lessons of this work was that the whole design process needs to be scalable on a parallel architecture. An efficient parallel solver is virtually useless if the pre and post-processing steps cannot handle large distributed data.

A nice video for the simulation of a smaller joint.


Simulations by Pierre Kerfriden

Monday, 9 February 2009

Multiscale simulation of delamination in composite laminates

This work makes use of the cohesive zone models developed at LMT Cachan by Prof. Allix [Allix et al. 1992] to simulate interlaminar crack propagation. The LaTIn domain decomposition solver [Ladeveze et al. 2002] handles the nonlinear interface constitutive law efficiently. In a nutshell, the cohesive behaviour is lumped into the interfaces of the domain decomposition method. The resulting problems for each substructure are linear and can be factorised once and for all at the start of the simulation. However, this process raises difficulties: the convergence rate of the iterative algorithm of the LaTIn domain decomposition approach is affected by the damage state of the cohesive interfaces. One of the major contributions of this work was to adapt the algorithm to these local nonlinearities, which allowed us to retrieve the expected numerical efficiency [Kerfriden et al. 2009].
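The "factorise once, iterate on the interface" structure can be sketched on a toy problem: two spring-chain substructures joined by an assumed linear-softening cohesive law. The LaTIn solver itself is considerably more elaborate; this only illustrates why confining the nonlinearity to the interface pays off.

```python
import numpy as np

# Two substructures (spring chains), clamped at their outer ends, joined by
# a damageable cohesive interface; stiffness values are illustrative
K1 = np.array([[20.0, -10.0], [-10.0, 10.0]])
K2 = np.array([[20.0, -10.0], [-10.0, 10.0]])
K1_inv = np.linalg.inv(K1)        # "factorised" once and for all
K2_inv = np.linalg.inv(K2)
s = K1_inv[1, 1] + K2_inv[1, 1]   # condensed compliance seen by the interface

# Assumed cohesive law: linear softening between gap d0 and gap dc
k_i, d0, dc = 1.0, 0.5, 5.0
def traction(delta):
    dam = np.clip((delta - d0) / (dc - d0), 0.0, 1.0)   # damage variable
    return k_i * (1.0 - dam) * delta

# Displacement-driven loading: only the local interface law is re-evaluated
# inside the iteration; the substructure operators are never re-factorised
tractions = []
delta = 0.0
for U in np.linspace(0.0, 6.0, 25):
    for _ in range(200):
        delta_new = U - s * traction(delta)   # linear stage (precomputed s)
        if abs(delta_new - delta) < 1e-12:
            break
        delta = delta_new
    tractions.append(traction(delta))
```

The computed traction-displacement curve rises, peaks, and softens to zero as the interface fully damages, while every linear solve in the loop reduces to reusing the pre-computed substructure compliances.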



The very rich coarse-scale problem of the LaTIn domain decomposition method limits its scalability. A Schur-based domain decomposition solver was developed to overcome this limitation [Kerfriden et al. 2009].
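The building block of a Schur-based solver, condensation of the interior unknowns onto the interface, can be sketched as follows. This is a generic dense example on an assumed SPD matrix, not the actual parallel solver.

```python
import numpy as np

rng = np.random.default_rng(6)
n_i, n_b = 8, 3                           # interior / interface unknowns
M = rng.standard_normal((n_i + n_b, n_i + n_b))
A = M @ M.T + (n_i + n_b) * np.eye(n_i + n_b)   # SPD system matrix

Aii, Aib = A[:n_i, :n_i], A[:n_i, n_i:]
Abi, Abb = A[n_i:, :n_i], A[n_i:, n_i:]
f = rng.standard_normal(n_i + n_b)
fi, fb = f[:n_i], f[n_i:]

# Condense the interior unknowns onto the interface (Schur complement)
S = Abb - Abi @ np.linalg.solve(Aii, Aib)
g = fb - Abi @ np.linalg.solve(Aii, fi)

ub = np.linalg.solve(S, g)                # small interface problem
ui = np.linalg.solve(Aii, fi - Aib @ ub)  # interior back-substitution

u_direct = np.linalg.solve(A, f)          # reference monolithic solve
```

In a domain decomposition setting, each subdomain eliminates its interior unknowns independently and in parallel, and only the (much smaller) interface problem couples the subdomains, which is what restores scalability.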