2024 · DOI: 10.1002/nme.7481
On sparse regression, Lp‐regularization, and automated model discovery

Jeremy A. McCulloch,
Skyler R. St. Pierre,
Kevin Linka
et al.

Abstract: Sparse regression and feature extraction are the cornerstones of knowledge discovery from massive data. Their goal is to discover interpretable and predictive models that provide simple relationships among scientific variables. While the statistical tools for model discovery are well established in the context of linear regression, their generalization to nonlinear regression in material modeling is highly problem‐specific and insufficiently understood. Here we explore the potential of neural networks for auto…
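The abstract's central idea of sparse regression with an Lp penalty can be illustrated with a minimal sketch. This is not the paper's implementation; the data, function names, and the choice alpha=0.1, p=1 are illustrative assumptions. The loss is the mean-squared error plus an Lp-norm penalty on the weights, which for p=1 promotes sparsity:

```python
import numpy as np

# Illustrative sketch of an Lp-regularized least-squares loss.
# The data and parameter values here are placeholders, not from the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
true_w = np.array([2.0, 0.0, 0.0, -1.0])  # sparse ground-truth weights
y = X @ true_w

def lp_loss(w, X, y, alpha=0.1, p=1.0):
    """Mean-squared error plus an Lp penalty: MSE + alpha * sum(|w_i|^p)."""
    mse = np.mean((X @ w - y) ** 2)
    penalty = alpha * np.sum(np.abs(w) ** p)
    return mse + penalty

# At the sparse ground truth the data term vanishes, leaving only the
# penalty: alpha * (|2| + |0| + |0| + |-1|) = 0.1 * 3
print(round(lp_loss(true_w, X, y), 6))
```

Smaller p values penalize small nonzero weights more aggressively relative to large ones, which is why Lp regularization with p ≤ 1 tends to drive irrelevant weights exactly to zero.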

Cited by 16 publications (1 citation statement) · References 77 publications
“…During training, our constitutive neural network learns the network weights w = {w_{I,1}, …, w_{I,3p+2}, w_{II,1}, …, w_{II,p}} by minimizing a loss function L [22]. This loss function is the mean-squared error, or the L2-norm of the difference between the model, ċ(c_img), and the training data, ĉ̇(c_img), across all regions divided by the number of training points n_train, …”
Section: Constitutive Neural Network
Confidence: 99%
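The loss described in the quoted passage, a mean-squared error summed across regions and divided by the number of training points, can be sketched as follows. The function name and the toy inputs are illustrative placeholders, not the citing paper's code:

```python
import numpy as np

# Hedged sketch of the quoted loss: the squared (L2) difference between
# model predictions and training data, divided by n_train.
def mse_loss(model_pred, data):
    """L = (1 / n_train) * sum((model - data)^2)."""
    model_pred = np.asarray(model_pred, dtype=float)
    data = np.asarray(data, dtype=float)
    n_train = data.size
    return np.sum((model_pred - data) ** 2) / n_train

# Toy example: only the third point disagrees, so L = (0 + 0 + 1) / 3
print(mse_loss([1.0, 2.0, 4.0], [1.0, 2.0, 3.0]))
```

In the cited setting the predictions and data would be the modeled and measured rates ċ(c_img) and ĉ̇(c_img) collected over all regions, with n_train their total count.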