2023
DOI: 10.1016/j.eswa.2023.119619
SAITS: Self-attention-based imputation for time series


Cited by 109 publications (23 citation statements). References 13 publications.
“…We motivate the choice of DMSA against the original mechanism in the ablation study conducted in Section 4.4. Note that this type of diagonal attention mask has already been shown to outperform the original one in the Vision Transformer [31], improving overall model performance on small datasets, and more recently in time series imputation [18].…”
Section: Embedding Block
confidence: 99%
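To make the diagonal attention mask referred to above concrete, here is a minimal sketch of scaled dot-product self-attention whose diagonal is masked out before the softmax, so a time step cannot attend to its own (possibly missing) value. The projection weights `w_q`, `w_k`, `w_v` are hypothetical stand-ins for learned linear layers; this is an illustration of the masking idea, not the exact implementation from either cited paper.

```python
import torch
import torch.nn.functional as F

def diagonally_masked_self_attention(x, w_q, w_k, w_v):
    """Self-attention with the diagonal of the score matrix masked out.

    x: (batch, seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections
    (illustrative placeholders for learned linear layers).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5        # (batch, T, T)

    # Set the diagonal to -inf so that, after softmax, each position
    # receives zero attention weight from itself.
    T = scores.size(-1)
    diag = torch.eye(T, dtype=torch.bool, device=scores.device)
    scores = scores.masked_fill(diag, float("-inf"))

    attn = F.softmax(scores, dim=-1)                     # zero on the diagonal
    return attn @ v, attn
```

For example, with `x` of shape (8, 48, 64) and 64x64 weight matrices, the returned attention map has zeros along its diagonal, forcing each step to reconstruct its value from the other steps.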
“…This problem is alleviated by the LSTM (Gers et al. [15]), which incorporates long-term stable memory over time using a series of gating functions. The LSTM has been widely used, achieving state-of-the-art results in numerous sequence-learning applications, such as COVID-19 detection (Hassan et al. [16]; Li et al. [17]), video sequence processing (Kong et al. [18]; Donahue et al. [19]), cancer metastasis detection (Kong et al. [20]), and time series analysis (Du et al. [21]). However, the traditional LSTM is not suitable for image sequence analysis since it uses a fully-connected structure in both the input-to-state and state-to-state transitions, neglecting spatial information.…”
Section: Related Work
confidence: 99%
“…The paper (Yunsi, Lahcen and Azzouz Mohamed, 2023) explores the application of Transformer neural networks to predict cryptocurrency prices. The paper (Du, Côté and Liu, 2023) proposes a novel method based on the self-attention mechanism, consisting of two diagonally-masked self-attention blocks that learn missing values from a weighted combination of temporal and feature dependencies.…”
Section: Researchers Have Proposed Adaptations Like the Temporal Fusi…
confidence: 99%
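The "weighted combination" described in that statement can be pictured with the rough sketch below: two candidate reconstructions (e.g., outputs of two successive attention blocks) are blended with per-position weights, and the blend is written only into the missing positions of the observed series. The tensor names and the way the combining weights are obtained are assumptions made for illustration, not the paper's exact formulation.

```python
import torch

def combine_and_impute(x, missing_mask, rep1, rep2, combining_weights):
    """Blend two reconstructions and fill in only the missing entries.

    x:                 observed series with missing entries, (batch, T, D)
    missing_mask:      1 where a value is observed, 0 where it is missing
    rep1, rep2:        reconstructions from two blocks (hypothetical names)
    combining_weights: per-position blend weights in [0, 1], same shape as x
    """
    # Weighted combination of the two candidate reconstructions.
    blended = combining_weights * rep1 + (1 - combining_weights) * rep2

    # Keep observed values; use the blended estimate only where data is missing.
    imputed = missing_mask * x + (1 - missing_mask) * blended
    return imputed
```

In SAITS itself the combining weights are learned from the attention maps and the missing mask; here they are simply passed in to keep the sketch self-contained.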