“…For instance, the mapping of variables in the generalized plasticity framework can be obtained by training a recurrent neural network that represents the path-dependent constitutive relation between the history of the input vertices $\sigma^{ivr}_n(p, q, \theta)$ and $\xi^{piv}_n(\bar{p}, \bar{p}_v, \bar{p}_s, e)$ and the output vertices $\mathbf{n}^{load}_n$, $\mathbf{m}^{flow}_n$, and $H_n$. The details of the training data preparation, network design, training, and testing follow the previous work on the meta-modeling framework for traction-separation models with data on microstructural features [Wang and Sun, 2019a]. In this framework, all neural network edges are generated with the same architecture, i.e., two hidden layers of 64 GRU (gated recurrent unit) neurons each, and a dense output layer with a linear activation function.…”
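As an illustration of the architecture named in the passage (two stacked hidden layers of 64 GRU units feeding a dense output layer with linear activation), the following is a minimal NumPy sketch of one such neural network edge mapping a history of input-vertex values to output-vertex values. Only the 64-unit hidden size and the layer arrangement come from the text; all other dimensions, parameter names, and the random initialization are hypothetical placeholders, and a real implementation would use a deep-learning library with trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

def init_gru(n_in, n_hidden):
    """Randomly initialized GRU parameters (illustrative only, not trained)."""
    g = lambda *shape: 0.1 * rng.standard_normal(shape)
    return {"Wz": g(n_hidden, n_in), "Uz": g(n_hidden, n_hidden), "bz": g(n_hidden),
            "Wr": g(n_hidden, n_in), "Ur": g(n_hidden, n_hidden), "br": g(n_hidden),
            "Wh": g(n_hidden, n_in), "Uh": g(n_hidden, n_hidden), "bh": g(n_hidden)}

def gru_layer(xs, p):
    """Run a GRU over a sequence xs of shape (T, n_in); returns (T, n_hidden)."""
    h = np.zeros(p["bz"].shape[0])
    out = []
    for x in xs:
        z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])   # update gate
        r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])   # reset gate
        h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])
        h = (1.0 - z) * h + z * h_tilde                    # gated state update
        out.append(h)
    return np.stack(out)

def edge_network(history, params):
    """Two stacked 64-unit GRU layers + dense linear output, per the text."""
    h1 = gru_layer(history, params["gru1"])
    h2 = gru_layer(h1, params["gru2"])
    return h2 @ params["W_out"].T + params["b_out"]        # linear activation

# Hypothetical sizes: a 7-component input history over 20 load steps,
# mapped to a 3-component output (e.g., loading/flow directions, modulus).
n_in, n_hidden, n_out, T = 7, 64, 3, 20
params = {"gru1": init_gru(n_in, n_hidden),
          "gru2": init_gru(n_hidden, n_hidden),
          "W_out": 0.1 * rng.standard_normal((n_out, n_hidden)),
          "b_out": np.zeros(n_out)}
y = edge_network(rng.standard_normal((T, n_in)), params)
```

Because the GRU carries its hidden state across load steps, the prediction at step $n$ depends on the entire loading history up to $n$, which is what makes this architecture suitable for the path-dependent constitutive relation described above.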