Proceedings of the 28th ACM International Conference on Information and Knowledge Management 2019
DOI: 10.1145/3357384.3358132
DSANet

Cited by 172 publications (23 citation statements: 0 supporting, 23 mentioning, 0 contrasting)
References 5 publications
“…DSANet [38]: a multivariate time series forecasting model that uses CNNs to capture the temporal correlations within each series and a self-attention mechanism to capture the spatial correlations across series.…”
Section: Benchmarking Methods (mentioning)
Confidence: 99%
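
As a reading aid, here is a minimal PyTorch sketch of the two-branch design this excerpt describes, assuming a Conv1d temporal branch and multi-head self-attention across series. It is illustrative only, not the authors' released code, and all layer sizes and names are assumptions.

```python
# Illustrative sketch (not the official DSANet code): a temporal CNN branch
# summarizes each series' window, then self-attention across series models
# the spatial (inter-series) correlations. Sizes below are assumptions.
import torch
import torch.nn as nn

class DSANetSketch(nn.Module):
    def __init__(self, d_model=64, n_heads=4, horizon=1):
        super().__init__()
        # Temporal branch: 1-D convolution over each series' look-back window
        self.temporal = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),   # one d_model-dim summary per series
        )
        # Spatial branch: each series is one token; attention relates series
        self.spatial = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x):                            # x: (batch, n_series, window)
        b, n, w = x.shape
        h = self.temporal(x.reshape(b * n, 1, w))    # (b*n, d_model, 1)
        h = h.squeeze(-1).reshape(b, n, -1)          # (b, n_series, d_model)
        h, _ = self.spatial(h, h, h)                 # attend across series
        return self.head(h)                          # (b, n_series, horizon)

x = torch.randn(8, 10, 32)                  # 8 samples, 10 series, window of 32
print(DSANetSketch()(x).shape)              # torch.Size([8, 10, 1])
```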
“…For TPA-LSTM and DSANet, we followed the parameter settings reported in the TPA paper [29] and the DSANet paper [30]. For the training of the LGEANet, we first generated a pre-trained network optimized on tumor trajectories from different patients. For each trace in the evaluation dataset, the first 50 min were used as the training period for transfer learning to build the patient-specific prediction model.…”
Section: Experimental Settings and Implementation Details (mentioning)
Confidence: 99%
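
The two-stage scheme in this excerpt (population-level pre-training, then patient-specific transfer learning on the first 50 min of a trace) can be sketched as below. The stand-in model, synthetic loaders, epoch counts, and learning rates are placeholder assumptions, not values from the cited paper.

```python
# Hypothetical sketch of the two-stage training described above: pre-train
# on trajectories pooled from many patients, then fine-tune on the first
# 50 min of one patient's trace to obtain a patient-specific model.
import copy
import torch

def train(model, loader, epochs, lr):
    """One training stage; reused for pre-training and fine-tuning."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

model = torch.nn.Linear(20, 1)                        # stand-in forecaster
pooled = [(torch.randn(64, 20), torch.randn(64, 1))]  # many patients, pooled
warmup = [(torch.randn(16, 20), torch.randn(16, 1))]  # first 50 min of a trace

# Stage 1: population pre-training; Stage 2: patient-specific fine-tuning
pretrained = train(model, pooled, epochs=50, lr=1e-3)
patient_model = train(copy.deepcopy(pretrained), warmup, epochs=20, lr=1e-4)
```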
“…DSANet [30] exploits the strong feature-extraction capability of self-attention networks: for each learned representation of a univariate series, it learns that representation's relationship to all of the other learned representations, including itself. A dual-stage attention-based Conv-LSTM network [31] was proposed to learn the spatio-temporal correlations of multivariate time series.…”
Section: Introduction (mentioning)
Confidence: 99%
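
The "relationship with other learned representations, including itself" is ordinary scaled dot-product self-attention over the per-series representations. A bare-bones sketch, with the function name and shapes assumed:

```python
# Each series representation (one row of H) attends to every representation,
# including itself, via scaled dot-product self-attention.
import math
import torch

def series_self_attention(H):
    """H: (n_series, d) learned per-series representations."""
    scores = H @ H.T / math.sqrt(H.size(-1))   # pairwise affinities
    weights = torch.softmax(scores, dim=-1)    # row i: i's relation to all j
    return weights @ H                         # attention-mixed representations

H = torch.randn(10, 64)                        # 10 series, 64-dim features
print(series_self_attention(H).shape)          # torch.Size([10, 64])
```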
“…Autoregressive model ensembling. Due to the nonlinearity of the convolutional, multi-head self-attention, and GRU modules, the scale of the output is not sensitive to the scale of the input [23]. To address this drawback, we apply a first-order autoregressive (AR) model to the preprocessed data to obtain a second predicted value.…”
Section: Modules of CGNN-MHSA-AR (mentioning)
Confidence: 99%
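
A minimal sketch of the AR ensembling idea in this excerpt, assuming an additive combination of the nonlinear network output with a linear AR(1)-style head; the class name and the combination rule are illustrative assumptions.

```python
# A linear AR head over recent observations restores scale sensitivity; its
# output is summed with the nonlinear network's prediction.
import torch
import torch.nn as nn

class ARHead(nn.Module):
    """First-order (p=1 by default) linear autoregressive component."""
    def __init__(self, p=1):
        super().__init__()
        self.p = p
        self.linear = nn.Linear(p, 1)

    def forward(self, x):                      # x: (batch, n_series, window)
        recent = x[:, :, -self.p:]             # last p observations per series
        return self.linear(recent).squeeze(-1) # (batch, n_series)

x = torch.randn(8, 10, 32)
nonlinear_pred = torch.randn(8, 10)        # stand-in for the network output
final = nonlinear_pred + ARHead()(x)       # second, scale-sensitive component
print(final.shape)                         # torch.Size([8, 10])
```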