2022
DOI: 10.48550/arxiv.2201.01032
Preprint

Learning Operators with Coupled Attention

Abstract: Supervised operator learning is an emerging machine learning paradigm with applications to modeling the evolution of spatio-temporal dynamical systems and approximating general black-box relationships between functional data. We propose a novel operator learning method, LOCA (Learning Operators with Coupled Attention), motivated by the recent success of the attention mechanism. In our architecture, the input functions are mapped to a finite set of features which are then averaged with attention weights that …
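In LOCA, the attention weights depend on the query location of the output function and are coupled across queries through a kernel averaging step before being applied. The NumPy sketch below illustrates this forward pass; linear maps stand in for the paper's feature and score networks, and all names, shapes, and the RBF kernel choice are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def rbf_kernel(Y, lengthscale=1.0):
    # Pairwise similarity between query locations, used to couple
    # the attention weights across queries (RBF is an assumed choice).
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale**2))

def loca_forward(u, Y, Wv, Wg):
    # u:  (m,)   input function sampled at m sensor locations
    # Y:  (q, d) query locations of the output function
    # Wv: (n, m) linear stand-in for the feature network v(u)
    # Wg: (n, d) linear stand-in for the score network g(y)
    v = Wv @ u                               # n candidate features
    scores = softmax(Y @ Wg.T, axis=-1)      # (q, n) per-query weights
    K = rbf_kernel(Y)
    K = K / K.sum(axis=-1, keepdims=True)    # row-normalize the kernel
    phi = K @ scores                         # kernel-coupled attention
    return phi @ v                           # (q,) output function values
```

Row-normalizing the kernel matrix makes the coupling step a proper averaging of the softmax scores over nearby queries, which is what lets the model capture correlations between output locations.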

Cited by 5 publications (4 citation statements)
References 38 publications

“…When comparing the prediction accuracy from different models, similar to the previous examples, the FNO suffers from overfitting and vanishing gradient issues when L > 2, especially in the original (more noisy) dataset. This finding is consistent with the results reported in [68,103], where the performance of the FNOs was found to deteriorate on noisy datasets. In contrast, the accuracy of the IFNOs monotonically improves as L increases. Both neural operator models outperform the conventional constitutive modeling approaches by around one order of magnitude.…”
Section: Results (supporting)
confidence: 92%
“…In the regularized cavity flow problem presented in [17], an additional trigonometric input function was included in the branch network to incorporate periodic boundary conditions. Similarly, an additional input function is included in [20]: it is created by averaging a feature embedded in the inputs of the branch network over probability distributions that depend on the corresponding query locations of the output function. This employs the kernel-coupled attention mechanism, which allows the operator to accurately model correlations between the query locations of the output functions.…”
Section: Feature Expansion in DeepONet (mentioning)
confidence: 99%
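The trigonometric feature expansion mentioned in this excerpt can be pictured with a short sketch. The helper below is hypothetical (the function name, period, and mode count are assumptions, not details from [17]); it appends sine/cosine harmonics of a coordinate so the branch network receives inputs consistent with a periodic boundary.

```python
import numpy as np

def periodic_branch_features(x, period=1.0, n_modes=2):
    # x: (N, 1) coordinate samples fed to the branch network.
    # Appends sin/cos harmonics of x as additional input features;
    # period and n_modes are illustrative assumptions.
    w = 2.0 * np.pi / period
    harmonics = []
    for k in range(1, n_modes + 1):
        harmonics.append(np.sin(k * w * x))
        harmonics.append(np.cos(k * w * x))
    return np.concatenate([x] + harmonics, axis=-1)  # (N, 1 + 2*n_modes)
```

Keeping the raw coordinate alongside the harmonics matches the "additional input function" phrasing above; dropping it would instead make the downstream network exactly periodic in x.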
“…Guo et al. [29] introduce attention as an instance-based learnable kernel for the direct sampling method and demonstrate superiority on boundary value inverse problems. Learning operators with coupled attention [32] uses attention weights to learn correlations in the output domain and enables sample-efficient training of the model. The general neural operator transformer for operator learning [25] proposes a heterogeneous attention architecture that stacks multiple cross-attention layers and uses a geometric gating mechanism to adaptively aggregate features from query points.…”
Section: Introduction (mentioning)
confidence: 99%
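For context on the cross-attention layers this excerpt describes, here is a single such layer in minimal NumPy form; the geometric gating of [25] is omitted, and all weight names and shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_pts, input_feats, Wq, Wk, Wv):
    # query_pts:   (q, d) output query locations
    # input_feats: (m, f) encoded input-function features
    # Wq: (d, dk), Wk: (f, dk), Wv: (f, dv) assumed projection weights
    Q = query_pts @ Wq
    K = input_feats @ Wk
    V = input_feats @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)
    return A @ V  # (q, dv) per-query aggregation of input features
```

Stacking several such layers and adaptively gating their outputs by query geometry is the aggregation strategy that [25] describes.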