2020
DOI: 10.48550/arxiv.2004.00265
Preprint

Learning Constitutive Relations using Symmetric Positive Definite Neural Networks

Kailai Xu, Daniel Z. Huang, Eric Darve

Abstract: We present the Cholesky-factored symmetric positive definite neural network (SPD-NN) for modeling constitutive relations in dynamical equations. Instead of directly predicting the stress, the SPD-NN trains a neural network to predict the Cholesky factor of a tangent stiffness matrix, based on which the stress is calculated in the incremental form. As a result of the special structure, SPD-NN weakly imposes convexity on the strain energy function, satisfies time consistency for path-dependent materials, and the…
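The construction described in the abstract can be sketched in a few lines. The parameter layout, helper names, and the toy 3-component (Voigt-style) state below are illustrative assumptions standing in for the network's output head, not the paper's exact implementation:

```python
import numpy as np

def spd_from_cholesky(l_params, n, eps=1e-8):
    """Assemble a tangent stiffness H = L L^T + eps*I from the n*(n+1)/2
    free parameters a network would predict (hypothetical layout:
    row-major lower triangle). H is SPD by construction."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = l_params
    return L @ L.T + eps * np.eye(n)

def incremental_stress(sigma_n, eps_n, eps_np1, l_params):
    """Incremental form: sigma_{n+1} = sigma_n + H (eps_{n+1} - eps_n)."""
    H = spd_from_cholesky(l_params, len(sigma_n))
    return sigma_n + H @ (eps_np1 - eps_n)

# Placeholder "network outputs": 6 = 3*(3+1)/2 lower-triangle entries.
params = np.array([1.0, 0.3, 1.2, -0.1, 0.2, 0.9])
H = spd_from_cholesky(params, 3)
assert np.allclose(H, H.T) and np.all(np.linalg.eigvalsh(H) > 0)
```

Because the stress is reconstructed from increments through the same H at every step, path dependence is handled consistently; the SPD structure is what enforces the (weak) convexity of the strain energy.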

Cited by 10 publications (18 citation statements)
References 33 publications
“…∂σ_{n+1}/∂ε_{n+1} = H ⪰ 0, and the symmetric positive definiteness of the tangent stiffness matrix H ensures that the strain energy is weakly convex. The weak convexity is beneficial for stabilizing both training and predictive modeling [41]. Equivalently, Equation (20) can be written in the following form…”
Section: Neural-network-based Viscous Constitutive Relations
confidence: 99%
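The weak-convexity claim in the excerpt above can be checked numerically: for H built from any Cholesky factor, the second-order energy term along an arbitrary strain increment is nonnegative. The 3×3 size and random factor are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
L = np.tril(rng.standard_normal((3, 3)))  # stand-in for a predicted Cholesky factor
H = L @ L.T                               # tangent stiffness, PSD by construction

# Weak convexity of the strain energy: for any strain increment d,
# the quadratic form d^T H d is nonnegative.
for _ in range(100):
    d = rng.standard_normal(3)
    assert d @ H @ d >= -1e-12
```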
“…ANN models are widely used to approximate complex mappings between inputs and outputs, owing to the universal approximation theorem [17], which allows them to approximate arbitrary functions in a form-free manner. Beyond universal function approximation, the ANN model also features several other advantages for constructing a constitutive model [18], such as better performance on unevenly distributed data, approximating non-smooth functions, and mapping high-dimensional inputs to outputs. The early work on ANN-based constitutive models was carried out by Ghaboussi and his co-workers [19][20][21][22][23][24].…”
Section: B. Applications
confidence: 99%
“…In other words, the input of the ANN model and the ANN model itself need to be determined simultaneously during training [28,42]. However, this approach offers the benefit that physical constraints are implicitly imposed during training, since the ANN model must pass through the FE model to generate correct input data [18,30]. Some examples of using indirectly measurable data for training ANN models are given in Table 1.…”
Section: Coupling ANN With Mechanical Models
confidence: 99%
“…Recently, there have been attempts to rectify the limitations of machine learning models that do not distinguish or partition the elastic and plastic strain. Xu et al (2020), for instance, introduce a differentiable transition function to create a smooth transition between the elastic and plastic range for an incremental constitutive law generated from supervised learning. Mozaffar et al (2019) and later Zhang and Mohr (2020) use machine learning to deduce the yield function and enable linear and distortion hardening using loss functions that minimize the L2 norm of the yield-function discrepancy.…”
Section: Why Sobolev Training For Plasticity
confidence: 99%
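The differentiable elastic-plastic transition mentioned in the excerpt above can be sketched as a smooth blend between two tangent stiffnesses. The sigmoid form, the `width` parameter, and all names below are generic illustrative assumptions, not necessarily the exact function used by Xu et al.:

```python
import numpy as np

def blended_tangent(H_el, H_pl, yield_measure, width=0.05):
    """Smoothly interpolate between the elastic and elastoplastic tangent
    stiffnesses. yield_measure < 0 means elastic, > 0 plastic; the sigmoid
    replaces the non-differentiable switch at 0, so the incremental law
    stays differentiable for gradient-based training."""
    t = 1.0 / (1.0 + np.exp(-yield_measure / width))  # 0 -> elastic, 1 -> plastic
    return (1.0 - t) * H_el + t * H_pl

# Toy 2x2 tangents: deep in each regime the blend recovers the pure tangent.
H_el = 2.0 * np.eye(2)
H_pl = 0.5 * np.eye(2)
deep_elastic = blended_tangent(H_el, H_pl, -1.0)  # ~H_el
deep_plastic = blended_tangent(H_el, H_pl, 1.0)   # ~H_pl
```

Near `yield_measure = 0` the blend passes smoothly between regimes, which is what makes the incremental constitutive law trainable end to end.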