2022
DOI: 10.48550/arxiv.2201.09113
Preprint

Predicting Physics in Mesh-reduced Space with Temporal Attention

Abstract: Graph-based next-step prediction models have recently been very successful in modeling complex high-dimensional physical systems on irregular meshes. However, due to their short temporal attention span, these models suffer from error accumulation and drift. In this paper, we propose a new method that captures long-term dependencies through a transformer-style temporal attention model. We introduce an encoder-decoder structure to summarize features and create a compact mesh representation of the system state, t…
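The pipeline the abstract outlines — encode each mesh state into a compact latent vector, then attend over the latent history with transformer-style temporal attention — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the seeded random projections stand in for learned weights, and all shapes and names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(latents, seed=0):
    """Causal self-attention over a sequence of per-step latent states.

    latents: (T, d) array -- one compact latent vector per time step, as a
    graph encoder pooling the mesh might produce (hypothetical setup).
    """
    T, d = latents.shape
    rng = np.random.default_rng(seed)
    # Fixed random projections stand in for learned Q/K/V weights.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = latents @ Wq, latents @ Wk, latents @ Wv
    scores = Q @ K.T / np.sqrt(d)                       # (T, T) similarities
    future = np.triu(np.ones((T, T), dtype=bool), k=1)  # positions ahead of t
    scores[future] = -np.inf                            # causal mask
    return softmax(scores, axis=-1) @ V                 # (T, d)

# Toy usage: a history of 16 latent states of dimension 32.
history = np.random.default_rng(1).standard_normal((16, 32))
out = temporal_attention(history)
print(out.shape)  # (16, 32)
```

Because of the causal mask, the prediction at step t uses only latents up to t — the long-range context that next-step-only models lack.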

Cited by 12 publications (17 citation statements)
References 32 publications

“…Looking ahead, the potential applications of GNS in industrial and research settings are extensive, especially in fields requiring rapid and accurate simulations of granular material behavior. Future work could focus on enhancing the GNS model training efficiency, perhaps by exploring more advanced machine learning techniques or hybrid models to handle time sequence information on the trajectories more efficiently. Further research could also delve into expanding the model applicability to three-dimensional simulations and more complex material interactions associated with polydisperse particle systems. Additionally, the model could be enhanced by incorporating a broader range of material properties like particle packing fraction, nonspherical particle shapes, and complex domain boundaries …”
Section: Discussion
confidence: 99%
“…The following properties are desirable for an adequate interpolation [35]: (1) interpolated values at the source nodes should match the original data; (2) integrated resultants should be conserved; and (3) interpolated fields should be continuous. Consequently, directly recasting the data across coincident points and nearest-neighbour interpolation, as proposed by Han et al [20], prove inappropriate because the conservation and continuity properties are not satisfied.…”
Section: Weighted Moving Least Squares For Grid Interpolation
confidence: 99%
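A weighted moving least squares fit of the kind this section's title names can be sketched as follows. The linear basis, Gaussian weights, and bandwidth `h` are illustrative assumptions, not the cited authors' exact scheme:

```python
import numpy as np

def mls_interpolate(src_xy, src_vals, tgt_xy, h=0.3):
    """Weighted moving least squares with a linear basis [1, x, y].

    For each target point, solve a small weighted least-squares fit over
    the source nodes (Gaussian weights of bandwidth h) and evaluate the
    fitted plane there. All parameter names here are illustrative.
    """
    B = np.column_stack([np.ones(len(src_xy)), src_xy[:, 0], src_xy[:, 1]])
    out = np.empty(len(tgt_xy))
    for i, p in enumerate(tgt_xy):
        d2 = np.sum((src_xy - p) ** 2, axis=1)
        w = np.exp(-d2 / h**2)                 # Gaussian weights, near points dominate
        # Weighted normal equations: (B^T W B) c = B^T W f
        A = B.T @ (w[:, None] * B)
        b = B.T @ (w * src_vals)
        c = np.linalg.solve(A, b)
        out[i] = c @ np.array([1.0, p[0], p[1]])
    return out

# A linear field is reproduced exactly, since it lies in the basis span.
rng = np.random.default_rng(0)
src = rng.random((60, 2))
f = 1.0 + 2.0 * src[:, 0] + 3.0 * src[:, 1]
tgt = rng.random((5, 2))
vals = mls_interpolate(src, f, tgt)
print(np.allclose(vals, 1.0 + 2.0 * tgt[:, 0] + 3.0 * tgt[:, 1]))  # True
```

Exact reproduction of fields in the basis span is what distinguishes this family of schemes from plain nearest-neighbour transfer, which the excerpt criticizes.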
“…Baqué et al [18] projected the 3D geometry to a simpler prismatic graph to contain the computational burden. On the other hand, comprehensive message-passing methods, as adopted by Hines and Bekemeyer [19] or Han et al [20], which feature dedicated learnable weights for each edge and node of the mesh, can lead to excessive memory requirements.…”
Section: Introduction
confidence: 99%
“…The Fourier Neural Operator (FNO) (Li et al., 2020) and its subsequent iterations (Li et al., 2021; Guibas et al., 2021; Tran et al., 2021) have demonstrated their efficacy in a variety of contexts. Building on the momentum of attention mechanisms (Vaswani et al., 2017), numerous studies (Geneva & Zabaras, 2022; Kissas et al., 2022; Han et al., 2022) have utilized attention to model and simulate physical phenomena. The Galerkin Transformer (Cao, 2021), for instance, employs attention layers to discern the underlying structures and patterns within the spatial domain of PDEs.…”
Section: Related Work
confidence: 99%
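For intuition, a softmax-free "Galerkin-type" attention layer of the kind Cao (2021) describes can be sketched as below; the per-column normalization stands in for the learnable layer norms of the real model, and all shapes are assumptions:

```python
import numpy as np

def galerkin_attention(Q, K, V):
    """Softmax-free attention in the Galerkin-Transformer style: normalize
    K and V column-wise over the n mesh points, then compute
    Q @ (K^T V) / n -- O(n d^2) rather than the O(n^2 d) of softmax
    attention, which matters on fine spatial discretizations.
    Simplified sketch only; the original uses learnable layer norms."""
    def colnorm(X):  # zero mean / unit variance along the point axis
        return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-6)
    n = Q.shape[0]
    return Q @ (colnorm(K).T @ colnorm(V)) / n

# Toy usage on n = 1000 spatial points with d = 16 channels.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((1000, 16)) for _ in range(3))
out = galerkin_attention(Q, K, V)
print(out.shape)  # (1000, 16)
```

Because K^T V is only d x d, the layer scales linearly in the number of spatial points, which is one reason such operators are attractive for PDE surrogates.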