The second UQSay seminar, organized by L2S and MSSMAT, will take place on Thursday afternoon, April 18, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi V, next to the one where we had UQSay #01).
We will have two talks, and hopefully some coffee in between:
14h — Chu Mai
(EDF R&D / MMC dept) — [slides]
Prediction of crack propagation kinetics through
multipoint stochastic simulations of microscopic fields
Prediction of crack propagation kinetics in the components of nuclear plant primary circuits undergoing Stress Corrosion Cracking (SCC) can be improved by refining the SCC models. One of the steps in the estimation of the time to rupture is the crack propagation criterion. Current models rely on macroscopic measures (e.g. stress, strain, ...) obtained, for instance, with the Finite Element Method. To go down to the microscopic scale and use local measures, a two-step approach is proposed. First, synthetic microstructures representing the material under specific loadings are simulated, and their quality is validated using statistical measures. Second, the shortest path to rupture in terms of propagation time is computed, and the distribution of these synthetic times to rupture is compared with the time to rupture estimated from macroscopic values alone. The first step is carried out with the Cross-Correlation Simulation (CCSIM), a multipoint simulation algorithm that produces synthetic stochastic fields from a training field. The Earth Mover's Distance is the metric used to assess the quality of the realizations. The shortest paths are computed with Dijkstra's algorithm. This approach refines the prediction of the kinetics of crack propagation compared with the macroscopic approach. An influence of the loading conditions on the distribution of the computed synthetic times to rupture was observed; it could be reduced through a more robust use of the CCSIM.
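As a rough illustration of the second step, here is a minimal Dijkstra sketch (my own toy example, not the speaker's implementation). It assumes the microscopic field has been reduced to a hypothetical 2D grid of local propagation times, with the crack initiating anywhere on the top row and rupture occurring when it reaches the bottom row:

```python
import heapq

def shortest_rupture_time(cost):
    """Dijkstra's algorithm on a 2D grid of local propagation times.

    cost[i][j] is the (hypothetical) time for the crack to traverse
    cell (i, j). The crack may initiate in any top-row cell; rupture
    occurs on reaching the bottom row. Returns the minimal cumulative
    propagation time, i.e. the synthetic time to rupture.
    """
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for j in range(cols):  # crack can initiate anywhere on the top row
        dist[0][j] = cost[0][j]
        heapq.heappush(heap, (cost[0][j], 0, j))
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i][j]:
            continue  # stale heap entry
        if i == rows - 1:
            return d  # first bottom-row cell reached: shortest time
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + cost[ni][nj]
                if nd < dist[ni][nj]:
                    dist[ni][nj] = nd
                    heapq.heappush(heap, (nd, ni, nj))
    return min(dist[-1])
```

Repeating this over many CCSIM realizations of the field would yield the distribution of synthetic times to rupture discussed in the abstract.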
15h — Olivier Le Maître
(LIMSI) — [slides]
Surrogate models and reduction methods for UQ
and inference in large-scale models
Uncertainty Quantification (UQ) and Global Sensitivity Analysis (GSA) in numerical models often rely on sampling approaches (either random or deterministic) that call for many resolutions of the model. Even though these computations can usually be carried out in parallel, applying UQ and GSA methods to large-scale simulations remains challenging from the computational, storage, and memory points of view. Similarly, Bayesian inference and assimilation problems can be adversely affected by over-abundant observations, because of over-constrained update problems or numerical issues (overflows, complexity, ...), raising the question of observation reduction.
A solution to alleviate the computational burden is to use a surrogate of the full large-scale model, which can be sampled extensively to estimate sensitivity coefficients and characterize the prediction uncertainty. However, building a surrogate for the whole large-scale model solution can be extremely demanding, and reduction strategies are needed. In this talk, I will introduce several techniques for the reduction of the model output and the construction of its surrogate. Some of these techniques will be illustrated on ocean circulation model simulations. For the reduction of observations, I will discuss and compare a few strategies, based on information-theoretic considerations, that have recently been proposed in the Bayesian framework.
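To make the surrogate idea concrete, here is a minimal toy sketch (my own example, not from the talk): a cheap least-squares polynomial surrogate is fitted to a handful of runs of a stand-in "expensive" model, and the surrogate is then sampled extensively to propagate input uncertainty at negligible cost:

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly large-scale simulation (hypothetical).
    return np.exp(-x) * np.sin(3.0 * x)

rng = np.random.default_rng(0)

# 1. A small design of experiments: only a few full-model runs.
x_train = np.linspace(0.0, 2.0, 15)
y_train = expensive_model(x_train)

# 2. Fit a cheap polynomial surrogate by least squares.
coeffs = np.polyfit(x_train, y_train, deg=8)
surrogate = np.poly1d(coeffs)

# 3. Sample the surrogate extensively (Monte Carlo) to estimate
#    the mean prediction under a uniform input uncertainty on [0, 2].
x_mc = rng.uniform(0.0, 2.0, size=100_000)
mean_est = surrogate(x_mc).mean()
```

In realistic settings the input is high-dimensional and the output is a full field, which is precisely why the output-reduction techniques mentioned in the abstract are needed before such a surrogate can be built.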
Organizers:
- Julien Bect (L2S),
- Fernando Lopez Caballero (MSSMAT),
- and Didier Clouteau (MSSMAT).
No registration is needed, but an email would be appreciated if you intend to come.