2019
DOI: 10.1016/j.cma.2018.11.026
Meta-modeling game for deriving theory-consistent, microstructure-based traction–separation laws via deep reinforcement learning

Cited by 115 publications (73 citation statements)
References 68 publications
“…For instance, the mapping of variables in the generalized plasticity framework can be obtained by training a recurrent neural network that represents the path-dependent constitutive relation between the history of the input vertices $\boldsymbol{\sigma}^{\mathrm{ivr}}_n(p, q, \theta)$ and $\boldsymbol{\xi}^{\mathrm{piv}}_n(\bar{\epsilon}^{p}, \bar{\epsilon}^{p}_{v}, \bar{\epsilon}^{p}_{s}, e)$ and the output vertices $\boldsymbol{n}^{\mathrm{load}}_n$, $\boldsymbol{m}^{\mathrm{flow}}_n$ and $H_n$. The details of training data preparation, network design, training and testing are specified in the previous work on the meta-modeling framework for traction-separation models with data of microstructural features [Wang and Sun, 2019a]. In this framework, all neural network edges are generated using the same neural network architecture, i.e., two hidden layers of 64 GRU (gated recurrent unit) neurons in each layer, and an output layer consisting of a dense layer with a linear activation function.…”
Section: Game Choice Alternatives: Training Neural Network Edges
mentioning
confidence: 99%
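
As a rough illustration of the architecture described in the statement above (two hidden GRU layers of 64 units each, followed by a dense linear output layer), a minimal Keras sketch is given below; the sequence length, input and output dimensions, and training setup are assumed placeholders, not values from the cited work.

```python
# Hypothetical sketch of a recurrent "neural network edge" as described above:
# two hidden GRU layers of 64 units each, plus a dense output layer with a
# linear activation. Input/output sizes and sequence length are placeholders.
import tensorflow as tf

seq_len   = 100   # length of the loading history (assumed)
n_inputs  = 7     # e.g. stress invariants plus plastic internal variables (assumed)
n_outputs = 5     # e.g. loading/flow direction components plus plastic modulus (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, n_inputs)),
    tf.keras.layers.GRU(64, return_sequences=True),        # first hidden GRU layer
    tf.keras.layers.GRU(64, return_sequences=True),        # second hidden GRU layer
    tf.keras.layers.Dense(n_outputs, activation="linear"), # dense linear output layer
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```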
“…The pseudocode of the reinforcement learning algorithm to play the two-player meta-modeling game is presented in Algorithm 2. This is an extension of the algorithm in [Wang and Sun, 2019a]. As demonstrated in Algorithm 2, each complete DRL procedure involves numIters training iterations and one final iteration for generating the final converged digraph model.…”
mentioning
confidence: 99%
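
A highly simplified sketch of that iteration structure (numIters training iterations followed by one final iteration that returns the converged choice) is shown below; the policy, game, and reward are dummy stand-ins for illustration only, not the authors' meta-modeling environment or Algorithm 2.

```python
import random

def play_game(policy, greedy=False):
    """Dummy stand-in: pick a model configuration and return it with a reward."""
    if greedy:
        choice = max(policy, key=policy.get)
    else:
        choice = random.choices(list(policy), weights=list(policy.values()))[0]
    reward = random.random()  # placeholder for the model-score reward
    return choice, reward

def run_drl(num_iters=5, episodes=10):
    policy = {"config_A": 1.0, "config_B": 1.0}  # uniform prior over choices
    for _ in range(num_iters):                   # numIters training iterations
        for _ in range(episodes):
            choice, reward = play_game(policy)
            policy[choice] += reward             # crude policy improvement
    # one final (greedy) iteration generates the converged configuration
    final_choice, _ = play_game(policy, greedy=True)
    return final_choice

print(run_drl())
```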
“…The gradient energy term is defined via a second-order tensor $\boldsymbol{\kappa}$,

$$\psi_{\mathrm{grad}}(\nabla\eta) = \frac{1}{2}\,\nabla\eta \cdot \boldsymbol{\kappa}\,\nabla\eta, \qquad (34)$$

and is anisotropic if $\boldsymbol{\kappa}$ is an anisotropic tensor. The components of $\boldsymbol{\kappa}$ are related to the barrier height $\omega$, the interface thickness, and the anisotropic interfacial energies based on the equilibrium solution for the one-dimensional problem and neglecting elasticity.

Table 3: Components of the deformation gradient tensor representing the eigenstrain in the Mg-Y β precipitate [29], at $c_Y = 0.125$: $F^{\beta}_{11} = 1.0307$, $F^{\beta}_{22} = 1.0196$, $F^{\beta}_{33} = 0.9998$.…”
Section: First-order Dynamics of a Phase Transforming Binary Alloy System
mentioning
confidence: 99%
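
For concreteness, the gradient energy density of Eq. (34) can be evaluated numerically as below; the values of κ and ∇η are arbitrary placeholders chosen only to illustrate the anisotropic quadratic form.

```python
# Illustrative evaluation of the gradient energy density in Eq. (34),
# psi_grad = 1/2 * grad(eta) . kappa . grad(eta), with an anisotropic kappa.
# The numerical values of kappa and grad(eta) are arbitrary placeholders.
import numpy as np

kappa = np.diag([2.0, 1.0, 0.5])        # anisotropic second-order tensor (assumed)
grad_eta = np.array([0.3, -0.1, 0.2])   # gradient of the order parameter (assumed)

psi_grad = 0.5 * grad_eta @ kappa @ grad_eta
print(psi_grad)  # 0.5 * (2*0.09 + 1*0.01 + 0.5*0.04) = 0.105
```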
“…Recently, graphs have been used to represent problem components such as governing equations, constitutive relations and initial/boundary conditions in the numerical framework of IBVPs [10]. Graph vertices and edges have also been used to represent variables and the relations between them, respectively, in a game-theoretic approach to discovering constitutive response functions for material failure [11]. The following sections consider computations of IBVPs for stationary and steady-state systems (Section 2), non-dissipative dynamics (Section 3) and dissipative dynamics (Section 4), and connect them to specific types of graphs. The standard machinery of graph-theoretic definitions and results is invoked for this purpose.…”
mentioning
confidence: 99%
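
A minimal sketch of that graph idea, with vertices as physical quantities and directed edges as the relations between them, could be written with networkx as follows; the variable and relation names are illustrative, not taken from references [10] or [11].

```python
# Minimal sketch: vertices are physical quantities, directed edges are the
# relations (mappings) between them. All names below are illustrative only.
import networkx as nx

g = nx.DiGraph()
g.add_nodes_from(["strain", "internal_variables", "stress"])
g.add_edge("strain", "internal_variables", relation="evolution law")
g.add_edge("strain", "stress", relation="constitutive relation")
g.add_edge("internal_variables", "stress", relation="constitutive relation")

for u, v, data in g.edges(data=True):
    print(f"{u} -> {v}: {data['relation']}")
```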
“…The simplest common way to do so is to estimate an Elo/GOR score for each network. The idea behind this number is that if $s_1$ and $s_2$ are the scores of two nets, then the probability that the first one wins against the second one in a single match is

$$\frac{1}{1 + e^{(s_2 - s_1)/c}},$$

so that $s_1 - s_2$ is, apart from a scaling coefficient $c$ (traditionally set to 400), the log-odds ratio of winning.…”
Section: E. Measuring Playing Strength
mentioning
confidence: 99%
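
The win-probability formula quoted above translates directly into code; the example scores are arbitrary, and c = 400 follows the convention mentioned in the statement.

```python
# Elo-style win probability as stated above: with scores s1 and s2 and scaling
# coefficient c (traditionally 400), the probability that the first network
# beats the second is 1 / (1 + exp((s2 - s1)/c)).
import math

def win_probability(s1, s2, c=400.0):
    return 1.0 / (1.0 + math.exp((s2 - s1) / c))

# Under this formula, a 400-point score gap gives roughly a 73% chance of winning:
print(win_probability(2000, 1600))  # ~0.731
```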