
Monday, 10 January 2022

Multiscale stress surrogates via probabilistic graph-based geometric deep learning

https://arxiv.org/abs/2205.06562


 

Fast stress predictions in multiscale materials & structures: graph-based probabilistic geometric deep learning with online physics-based corrections

https://618.euromech.org/slides/

Sunday, 8 July 2018

CutFEM method to simulate composite fracture



Phase-field fracture in the bulk; zero-thickness cohesive elements with frictional contact at the matrix/inclusion interfaces

Sunday, 18 March 2018

Laser ablation: micro-cavity

We have developed a cut finite element method for one-phase Stefan problems, with applications in laser manufacturing. The geometry of the workpiece is represented implicitly via a level set function. Material above the melting/vaporisation temperature is represented by a fictitious gas phase. The moving interface between the workpiece and the fictitious gas phase may cut arbitrarily through the elements of the finite element mesh, which remains fixed throughout the simulation, thereby circumventing the need for cumbersome remeshing operations. The primal/dual formulation of the linear one-phase Stefan problem is recast into a primal non-linear formulation using a Nitsche-type approach, which avoids the difficulty of constructing inf-sup stable primal/dual pairs. Through the careful derivation of stabilisation terms, we show that the proposed Stefan-Signorini-Nitsche CutFEM method remains stable independently of the cut location. In addition, we obtain optimal convergence with respect to space and time refinement. Several 2D and 3D examples are proposed, highlighting the robustness and flexibility of the algorithm, together with its relevance to the field of micro-manufacturing.
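The implicit geometry handling is easy to illustrate. Below is a minimal, self-contained sketch in plain NumPy (illustrative names; this is not the CutFEM library API): a toy signed-distance level set marks the workpiece, and every cell of the fixed background mesh is classified as fully solid, fully fictitious gas, or cut by the melt front. The Nitsche coupling and the stabilisation terms discussed above live on the cut cells.

    import numpy as np

    def phi(p, centre=np.array([0.5, 0.5]), radius=0.25):
        """Toy signed distance to a circular melt front: phi < 0 in the
        workpiece, phi > 0 in the fictitious gas phase, phi = 0 on the
        interface."""
        return np.linalg.norm(p - centre, axis=-1) - radius

    def classify_cells(vertices, cells):
        """Label each cell of the fixed background mesh by where it sits
        relative to the interface: 'solid', 'gas', or 'cut'."""
        labels = []
        for cell in cells:
            s = np.sign(phi(vertices[cell]))
            if np.all(s < 0):
                labels.append("solid")   # fully inside the workpiece
            elif np.all(s > 0):
                labels.append("gas")     # fully in the fictitious phase
            else:
                labels.append("cut")     # interface crosses this cell:
                                         # Nitsche terms and stabilisation
                                         # are applied here
        return labels

    # structured mesh of the unit square; it never moves during the simulation
    n = 8
    xs, ys = np.meshgrid(np.linspace(0, 1, n + 1), np.linspace(0, 1, n + 1))
    vertices = np.column_stack([xs.ravel(), ys.ravel()])
    quads = [[j * (n + 1) + i, j * (n + 1) + i + 1,
              (j + 1) * (n + 1) + i + 1, (j + 1) * (n + 1) + i]
             for j in range(n) for i in range(n)]
    print(classify_cells(vertices, np.array(quads))[:10])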


 

Simulations by S. Claus, S. Bigot and P. Kerfriden, using the CutFEM library for FEniCS.

S. Claus, S. Bigot and P. Kerfriden,
CutFEM Method for Stefan–Signorini Problems with Application in Pulsed Laser Ablation, SIAM J. Sci. Comput., 40(5), 2018

Funding: Sêr Cymru National Research Network

Monday, 30 January 2017

Unilateral contact simulations with a stable LaTIn solver for non-conforming "Cut" finite elements

We have recently developed an unfitted finite element solver for sets of solids interacting through unilateral contact. The analysis mesh is regular, and the geometry of the solids is allowed to cut through the elements. The two pictures below illustrate the capabilities of our solver.


The solver itself combines the best of two worlds. On the one hand, we use elements of the CutFEM technology pioneered by E. Burman, P. Hansbo and M.G. Larson in order to allow interfaces to cut through the mesh without altering the convergence rate associated with finite element solvers. On the other hand, the different phases of the composite are coupled using the LaTIn solver, first proposed by P. Ladevèze, whose versatility has been proven over the years. We modified the discrete mixed formulation associated with the LaTIn method in order to stabilise the interface solution, ensuring that the condition number of the successive linear systems of equations is controlled and that the convergence with mesh refinement is optimal. More details about our general strategy can be found in the wide-audience paper that we have written for the NAFEMS magazine Benchmark, available in draft form here.
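To illustrate the LaTIn side of the coupling, here is a minimal sketch of the pointwise local stage for frictionless unilateral contact (plain NumPy, illustrative names; the cut-cell stabilisation of the interface variables is not shown). Given the traction and gap predicted by the linear stage, the Signorini conditions combined with a LaTIn search direction reduce to a simple projection:

    import numpy as np

    def latin_local_stage(lam_hat, gap_hat, k):
        """Local stage of a LaTIn iteration for frictionless unilateral
        contact. Sign convention: traction lam <= 0 in compression, gap >= 0
        when open. The Signorini conditions
            gap >= 0,  lam <= 0,  lam * gap = 0
        are solved together with the search direction
            (lam - lam_hat) + k * (gap - gap_hat) = 0,  k > 0,
        which links back to the linear-stage prediction (lam_hat, gap_hat)."""
        tau = lam_hat + k * gap_hat
        lam = np.minimum(tau, 0.0)       # contact wherever tau <= 0
        gap = np.maximum(tau, 0.0) / k   # separation wherever tau > 0
        return lam, gap

    # two interface points: one in compression, one opening up
    print(latin_local_stage(np.array([-1.0, 0.2]), np.array([0.0, 0.1]), k=10.0))

The projection is applied independently at every interface point, which is what makes the local stage of the LaTIn method embarrassingly parallel.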



Some more pictures of the geometry of the 3D woven composite material. The fibres are easily described using a set of appropriate level set functions. The matrix block can be meshed "by hand", as it is not required to conform to the complex geometry of the interfaces between the phases of the composite.

The simulations were performed using the CutFEM FEniCS library.

Susanne Claus, Pierre Kerfriden, A stable and optimally convergent LaTIn-Cut Finite Element Method for multiple unilateral contact problems, IJNME, 2016

Tuesday, 14 April 2015

Multiscale approximation of stochastic PDEs with rapidly varying coefficients: certification of accuracy through error bounding

The goal of this research is to certify the accuracy of homogenisation schemes in the sense of engineering quantities of interest.

Homogenisation has been used for well over a century to upscale some knowledge of the physics of heterogeneous materials to the engineering scale, where predictions are required. Homogenisation is typically a limit result that delivers good predictions when the typical length-scale of the material heterogeneities is small compared to the engineering scale (i.e. the scales are well separated). Broadly speaking, homogenisation fails in boundary regions that are dominated by stress concentrations (around sharp joints, holes, at the interfaces between different materials, ...).

We investigate here a methodology to quantify the error that is made when homogenisation is used although scale separability is not satisfied. We started from the modelling-error methodology developed at ICES, Texas, in the early 2000s. The approach proposed by this group is to bound the error that is made on engineering quantities of interest (QoI) when using an arbitrary homogenisation scheme as an approximation of the intractable, fine-scale heterogeneous problem. This was done by extending the equilibrated residual method (classically used to quantify discretisation errors) to the context of modelling error, and by combining it with the adjoint methodology to convert error measures in the energy norm into errors in QoI. The method was shown to deliver guaranteed error bounds without requiring the solution of the underlying heterogeneous problem. However, the heterogeneous problem still needs to be constructed (but not solved) in order to compute the bounds, which, in the case of large composite structures, remains a computational bottleneck.




We tackle this issue by representing the microscale heterogeneities as a random field. Besides being a realistic modelling choice (we rarely know precisely where the heterogeneities of composite materials are located), this representation allows us to completely alleviate the need for meshing and assembling the fine-scale heterogeneous problem. We therefore retrieve a numerical separation of scales for the computation of the modelling error bounds.


We showed that this methodology could be applied to provide bounds for the stochastic homogenisation error made on both the first and second statistical moments of engineering QoI. These bounds can be implemented within a couple of hours in any finite element code. They can be interpreted as an extension of the classical Reuss-Voigt bounds, but without any a priori requirement in terms of scale separability.
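For reference, the classical Reuss-Voigt bounds mentioned above can be computed in a few lines. Here is a sketch for a scalar modulus (plain NumPy, illustrative values):

    import numpy as np

    def voigt_reuss_bounds(moduli, fractions):
        """First-order bounds on the effective modulus of a multiphase
        material: Voigt (arithmetic mean, upper bound) and Reuss (harmonic
        mean, lower bound). Valid for any microstructure, with no
        scale-separation assumption."""
        moduli = np.asarray(moduli, dtype=float)
        fractions = np.asarray(fractions, dtype=float)
        voigt = np.sum(fractions * moduli)
        reuss = 1.0 / np.sum(fractions / moduli)
        return reuss, voigt

    # 60% matrix (E = 3 GPa), 40% stiff inclusions (E = 70 GPa)
    print(voigt_reuss_bounds([3.0, 70.0], [0.6, 0.4]))  # ~ (4.86, 29.8) GPa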

All the numerical results were obtained by Daniel Alves Paladim in the context of his PhD thesis. We have benefitted greatly from the advice of Mathilde Chevreuil, University of Nantes, on the stochastic aspects of this work.

Monday, 12 January 2015

Bayesian optimisation for the selection of representative load paths in computational material testing (best UK Ph.D. in Comp. Mech. for Dr Olivier Goury)

The aim of computational material testing is to obtain the relationship between forces and extensions of a complex material through simulations, given some information about its microstructure. This is conceptually similar to pulling on a specimen experimentally and reporting the force that needs to be applied to obtain an overall deformation of the material. Of course, replacing this costly experimental setup by simulations allows practitioners to investigate the use of new materials before manufacturing them, to play with the microstructure (e.g. varying the fibre content in advanced fibre-reinforced concrete) in order to design a material that fits particular engineering needs, or to test and control the reaction of the material when used in the design of a complex, multiscale engineering system.

However, computational material testing (or computational homogenisation) is very demanding in terms of computational resources. Typically, the material model needs to be solved at every material point of the engineering system of interest, which can be arbitrarily large.

The solution that we have been investigating to alleviate this issue is to develop efficient model order reduction techniques. In an offline stage, the material is tested using pre-defined loading scenarios, which leads to a set of particular mechanical states (the snapshots). Then, when a response of the material is required by the engineer for design purposes (i.e. online), it is obtained by performing an optimal interpolation in the space spanned by these pre-computed states, which reduces the overall computational cost by orders of magnitude.
Figure 1: Model order reduction for computational material testing 
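The offline/online split can be sketched in a few lines of plain NumPy (illustrative names and data; the actual snapshots come from nonlinear RVE solves): the snapshot matrix is compressed by a truncated SVD (POD), and any new state is then approximated by orthogonal projection onto the retained modes.

    import numpy as np

    def pod_basis(snapshots, n_modes):
        """Offline: snapshots is the (n_dofs, n_snapshots) matrix of
        precomputed mechanical states; keep the dominant left singular
        vectors as the reduced basis."""
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :n_modes]

    def project(state, basis):
        """Online: best approximation of 'state' in the reduced space
        (the basis is orthonormal, so the optimal coefficients are
        basis.T @ state)."""
        return basis @ (basis.T @ state)

    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((1000, 20))    # stand-in for offline solves
    V = pod_basis(snapshots, n_modes=5)
    u_new = snapshots @ rng.standard_normal(20)    # a state in the snapshot span
    err = np.linalg.norm(u_new - project(u_new, V)) / np.linalg.norm(u_new)
    print(err)   # small only if 5 modes capture the snapshot space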
The choice of the pre-defined load scenarios is key to the success of this approach. All the states that will need to be predicted should be sufficiently well represented by the states contained in the snapshot. At the present stage of our research, we consider that all potential load cases are equally likely. Although very general, this framework leads to an immense space of mechanical states to explore and represent.

Figure 2: Random load-path generation in 2D.
In 3D, the paths belong to a hypercube of dimension 6.
A first approach to exploring this space of likely states is to choose random load paths (see Figure 2); a minimal sketch of such a generator is given below. In the case of damage mechanics, we constrain the random generation to follow states of strictly increasing energy dissipation, thereby forcing the generator to explore non-trivial (i.e. damaged) mechanical states. The reduced order models obtained by applying this simple idea are surprisingly efficient and reliable in our numerical test cases (a simple elastic, damageable multiphase representative volume element).
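The sketch below is plain NumPy; the 'dissipation' callback standing in for the RVE solver is hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    def random_load_path(n_steps, dim=6, step=0.1, dissipation=None):
        """Random piecewise-linear load path in the hypercube of macroscopic
        strain components (dim = 6 in 3D). If a 'dissipation' callback is
        provided (a hypothetical hook into the RVE solver), each increment
        is resampled, a bounded number of times, until the dissipated energy
        strictly increases, so the path keeps exploring damaged states."""
        path, d_prev = [np.zeros(dim)], 0.0
        for _ in range(n_steps):
            for _ in range(100):   # bounded resampling
                candidate = np.clip(path[-1] + step * rng.uniform(-1, 1, dim),
                                    -1, 1)
                d = dissipation(candidate) if dissipation else d_prev + 1.0
                if d > d_prev:
                    break
            path.append(candidate)
            d_prev = max(d, d_prev)
        return np.array(path)

    path = random_load_path(n_steps=20)   # one random path in [-1, 1]^6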

Our second approach is more advanced and aims at providing a quasi-optimal family of load cases. The idea is to locate the load case that is the most incorrectly predicted when using the previously generated mechanical states, and to iterate until the accuracy of the reduced order model is acceptable. This data-driven approach is very appealing, but it requires a measure of goodness of prediction over all potential load cases, which is difficult to (i) define and (ii) obtain at reasonable numerical cost. We have tackled the affordability issue by adopting:
  1. A hierarchical description of the load paths using adaptive shape functions in time: start with proportional loading, then introduce an increasing number of kinks at arbitrary locations on the load path (see Figure 3).
  2. A Bayesian optimisation algorithm to detect the case of worst prediction for a given approximate description of potential load cases; a minimal sketch follows this list. Precisely, we compute the residual of the governing equations at points chosen quasi-optimally using Gaussian process interpolation and the associated notion of maximum probability of improvement. Subsequently, we establish a relationship between a measure of this residual and the goodness of prediction through a probabilistic regression technique.

Figure 3: Automatic path generation through probabilistic worst-case scenario detection (left: proportional loading; right: space of load paths containing a single kink).
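A minimal sketch of the acquisition step (plain NumPy/SciPy, not the code of the thesis; the 1D load-path parametrisation and the residual function are illustrative stand-ins). A Gaussian process surrogate is fitted to the residuals observed so far, and the probability of improvement over the current worst case selects the next load path to test:

    import numpy as np
    from scipy.stats import norm

    def gp_posterior(X, y, Xs, length=0.2, noise=1e-6):
        """Gaussian process regression with an RBF kernel of unit variance;
        returns the posterior mean and standard deviation at the points Xs."""
        k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
        K = k(X, X) + noise * np.eye(len(X))
        Ks = k(X, Xs)
        mean = Ks.T @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))
        return mean, np.sqrt(np.maximum(var, 1e-12))

    def next_load_case(X, y, candidates):
        """Maximum probability of improvement over the current worst
        (largest) residual: we are hunting for badly predicted load cases."""
        mean, std = gp_posterior(np.asarray(X), np.asarray(y), candidates)
        pi = norm.cdf((mean - max(y)) / std)
        return candidates[np.argmax(pi)]

    residual = lambda x: np.sin(8 * x) + x    # stand-in for the ROM residual
    X, y = [0.1, 0.9], [residual(0.1), residual(0.9)]
    grid = np.linspace(0.0, 1.0, 200)
    for _ in range(10):                       # Bayesian optimisation loop
        x_new = next_load_case(X, y, grid)
        X.append(x_new)
        y.append(residual(x_new))             # the (expensive) ROM test

In the real method, the candidates live in the hierarchical space of load paths of point 1, and the residual is mapped to a prediction-error measure through the probabilistic regression mentioned above.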
This new approach to load-path selection provides some confidence in the accuracy of the constructed reduced model (we do not leave unexplored regions in the space of potential mechanical states), but it is significantly more expensive than the random generation approach: we must pay for reliability, in particular in the context of complex, nonlinear problems. This overhead may decrease when considering more complex material behaviours (visco-plasticity, anisotropy), owing to the fact that the random generator may require more time to detect small outliers in the space of potential mechanical states.

More details about this research can be found in the Ph.D. thesis of Dr Olivier Goury.

Goury, O., Amsallem, D., Bordas, S.P.A. et al. Comput Mech (2016) 58: 213. https://doi.org/10.1007/s00466-016-1290-2

Wednesday, 13 March 2013

Adaptive multiscale methods for multiple intergranular discontinuities


In order to simulate fracture in heterogeneous materials, one of the most promising approaches is to model the behaviour of the material at the scale of the material heterogeneities, which is usually called micro or meso-modelling. In a second step, these fine-scale features can be transferred to the scale of the structure by averaging techniques (on representative volume elements or unit cells) or homogenisation. However, in the case of fracture, these upscaling methods cannot be used in the vicinity of cracks, as the separation of scales necessary for their application is lost.
In the literature, two schools of thought aim at alleviating this problem. The first tries to extend the applicability of averaging techniques to fracture (in the case of established damage bands). The second analyses the zones where homogenisation fails directly at the microscopic scale, in a concurrent framework. Although the latter approach is more general, it is heavier in terms of computations and requires the development of robust adaptivity procedures, which is the topic of this project.

From the PhD thesis of Ahmad Akbari R.
We propose to capture the initiation of the damage mechanisms at the macroscale using a classical FE2 approach. In order to control the precision of the simulations, an error estimation for the upscaling strategy is carried out at each step of the time integration algorithm. Based on this estimation, the macro elements are refined hierarchically where needed. When the size of a macro element becomes of the order of the statistical volume element used in the FE2 method, the homogenisation step is bypassed. Instead, the corresponding process zone is modelled directly at the microscale and coupled to the homogenised region by a mortar-type gluing technique.
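The adaptive logic can be summarised by a runnable toy in Python (1D macro domain, hypothetical error indicator; the actual FE2 solves, estimator and mortar coupling are not shown):

    def adapt(elements, error, sve_size, tol):
        """elements: list of (centre, size, model) tuples on a 1D macro mesh.
        Refine hierarchically where the upscaling error indicator exceeds
        tol; once an element would become smaller than the SVE, switch it
        to a direct microscale model instead."""
        out = []
        for x, h, model in elements:
            if model == "micro" or error(x) * h <= tol:
                out.append((x, h, model))
            elif h / 2 < sve_size:
                out.append((x, h, "micro"))    # bypass homogenisation here
            else:                              # hierarchical bisection
                out += [(x - h / 4, h / 2, "FE2"), (x + h / 4, h / 2, "FE2")]
        return out

    # toy error indicator peaking at a process zone located at x = 0.3
    err = lambda x: 1.0 / (abs(x - 0.3) + 1e-2)
    mesh = [(0.125 + 0.25 * i, 0.25, "FE2") for i in range(4)]
    for _ in range(4):
        mesh = adapt(mesh, err, sve_size=0.05, tol=0.5)
    print(sorted(mesh))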


Simulations by Ahmad Akbari, P. Kerfriden and S.P.-A. Bordas


A Akbari Rahimabadi, P Kerfriden, S Bordas, Scale selection in nonlinear fracture mechanics of heterogeneous materials, Philosophical Magazine 95 (28-30), 3328-3347, 2015

Monday, 7 June 2010

"On-the-fly" Reduced Order Modelling approach for the simulation of damage propagation in heterogeneous media

This research establishes a bridge between POD-based model order reduction techniques and classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, "on-the-fly", the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are tackled via a corrected hyperreduction method. We show that the relevance of the reduced order model can be significantly improved, at reasonable additional cost, when using this algorithm, even when strong topological changes are involved.
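A linear-algebra sketch of the bridge (plain NumPy, illustrative data): the linearised system of a Newton step is solved first in the POD subspace; if the full residual remains too large, a few Krylov-type descent iterations on the full system correct the reduced solution on-the-fly.

    import numpy as np

    def rom_with_correction(J, r, V, tol=1e-8, max_krylov=200):
        """Solve the Newton update J du = -r, first by a Galerkin projection
        onto the POD basis V, then by correcting the leftover residual with
        steepest-descent (Krylov) iterations on the full system (J assumed
        symmetric positive definite here)."""
        du = V @ np.linalg.solve(V.T @ J @ V, V.T @ (-r))   # reduced solve
        res = -r - J @ du
        its = 0
        while np.linalg.norm(res) > tol and its < max_krylov:
            alpha = (res @ res) / (res @ (J @ res))          # descent step
            du += alpha * res                                # on-the-fly correction
            res = -r - J @ du
            its += 1
        return du, its

    rng = np.random.default_rng(2)
    A = rng.standard_normal((50, 50))
    J = A @ A.T + 50 * np.eye(50)                        # stand-in SPD tangent
    r = rng.standard_normal(50)
    V, _ = np.linalg.qr(rng.standard_normal((50, 4)))    # stand-in POD basis
    du, its = rom_with_correction(J, r, V)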




Monday, 9 February 2009

Multiscale simulation of delamination in composite laminates

This work makes use of the cohesive zone models developed at LMT Cachan by Prof. Allix [Allix et al. 1992] to simulate inter-laminar crack propagation. The LaTIn domain decomposition solver [Ladevèze et al. 2002] makes it possible to handle the nonlinear interface constitutive law efficiently. In a nutshell, the cohesive behaviour is lumped into the interfaces of the domain decomposition method. The resulting problems for each substructure are linear and can be factorised once and for all at the start of the simulation. However, this process raises difficulties: the convergence rate of the iterative algorithm of the LaTIn domain decomposition approach is affected by the damage state of the cohesive interfaces. One of the major contributions of this work was to adapt the algorithm to the local nonlinearities, which enabled us to retrieve the expected numerical efficiency [Kerfriden et al. 2009].



The very rich coarse-scale problem of the LaTIn domain decomposition method limits its scalability. A Schur-based domain decomposition solver is developed to overcome this limitation [Kerfriden et al. 2009].
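The key algebraic step behind such a solver is static condensation onto the interface unknowns; here is a dense-algebra sketch (plain NumPy, illustrative only; the actual solver works substructure by substructure and never forms S explicitly):

    import numpy as np

    def schur_condense(K, f, interior, boundary):
        """Condense K u = f onto the boundary (interface) dofs:
        S = K_bb - K_bi K_ii^{-1} K_ib and g = f_b - K_bi K_ii^{-1} f_i,
        so that S u_b = g; interior dofs are recovered afterwards by
        back-substitution."""
        Kii = K[np.ix_(interior, interior)]
        Kib = K[np.ix_(interior, boundary)]
        Kbi = K[np.ix_(boundary, interior)]
        Kbb = K[np.ix_(boundary, boundary)]
        S = Kbb - Kbi @ np.linalg.solve(Kii, Kib)
        g = f[boundary] - Kbi @ np.linalg.solve(Kii, f[interior])
        return S, g

    # toy SPD system with dofs 0-7 interior and dofs 8-9 on the interface
    rng = np.random.default_rng(3)
    A = rng.standard_normal((10, 10))
    K = A @ A.T + 10 * np.eye(10)
    S, g = schur_condense(K, rng.standard_normal(10), list(range(8)), [8, 9])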