Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
DOI: 10.1145/3459637.3482136
Locker: Locally Constrained Self-Attentive Sequential Recommendation

Cited by 46 publications (25 citation statements)
References 12 publications
“…Additionally, we examine the inference efficiency for varying input lengths, see Figure 4. We observe that when sequences are shorter than 2^12, ELTransformer performs on average similarly to CNN models (O(l) computational complexity) and outperforms BERT4NILM. For longer sequences, the inference time of BERT4NILM starts growing quadratically, while ELTransformer remains efficient, suggesting that ELTransformer has similar efficiency to CNN while being much more scalable with only 1.91M parameters.…”
Section: Implementation, 1) Preprocessing
confidence: 85%
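The efficiency claim in this citation statement rests on a simple counting argument: full self-attention scores every query-key pair, so its cost grows with the square of the sequence length, whereas convolutions and locally constrained attention only touch a fixed neighbourhood per position. The sketch below is purely illustrative (plain Python, not the ELTransformer or BERT4NILM code); the window size of 64 and the sequence lengths other than 2^12 are assumptions chosen only to show the scaling.

```python
from typing import Optional

def attended_pairs(seq_len: int, window: Optional[int] = None) -> int:
    """Number of query-key pairs scored by one self-attention layer.

    window=None -> full self-attention: every position attends to every
                   position, so the count is seq_len**2 (quadratic).
    window=w    -> locally constrained attention: each position attends to
                   at most w neighbours, so the count is ~seq_len * w
                   (linear in seq_len).
    """
    if window is None:
        return seq_len ** 2
    return seq_len * min(window, seq_len)

# 2**12 is the crossover length mentioned in the quoted statement; the
# other lengths and the window size are illustrative assumptions.
for L in (2 ** 10, 2 ** 12, 2 ** 14):
    full = attended_pairs(L)
    local = attended_pairs(L, window=64)
    print(f"L={L:6d}  full={full:12d}  local(w=64)={local:10d}  ratio={full / local:6.0f}x")
```

Doubling the sequence length quadruples the full-attention count but only doubles the windowed count, which is the behaviour the quoted inference-time comparison reports.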
“…For the seq2point setting in this paper, another drawback of the transformer model is the lack of localness modeling. Although self-attention is designed to capture long-term semantics from input sequences, it often fails to capture short-term and local signal patterns [12]. In energy disaggregation, the lack of local dependency modeling can lead to mismatches and performance drops for multi-status appliances [11].…”
Section: Transformer Models in NILM
confidence: 99%
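Locally constrained self-attention addresses exactly this gap by masking the attention matrix so that each position can only attend to a small window of neighbours, forcing the layer to model short-term, local patterns. The following is a minimal PyTorch sketch of such a banded mask applied to scaled dot-product attention; it follows the general idea behind LOCKER [8] but is not that paper's implementation, and the window size, tensor shapes, and absence of query/key/value projections are simplifying assumptions.

```python
import torch

def local_self_attention(x: torch.Tensor, window: int = 3) -> torch.Tensor:
    """Scaled dot-product self-attention restricted to a local window.

    x:      (batch, seq_len, dim) sequence representations; for brevity the
            same tensor serves as queries, keys, and values (no projections).
    window: each position may only attend to positions within `window`
            steps of itself (a banded attention mask).
    """
    b, l, d = x.shape
    scores = x @ x.transpose(-2, -1) / d ** 0.5            # (b, l, l)

    # Banded mask: True where |i - j| <= window, False elsewhere.
    idx = torch.arange(l, device=x.device)
    band = (idx[None, :] - idx[:, None]).abs() <= window   # (l, l)
    scores = scores.masked_fill(~band, float("-inf"))

    attn = torch.softmax(scores, dim=-1)                   # rows sum to 1
    return attn @ x                                        # (b, l, d)

# Toy usage with assumed shapes.
out = local_self_attention(torch.randn(2, 50, 16), window=3)
print(out.shape)  # torch.Size([2, 50, 16])
```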
“…SSE-PT [22] extends SASRec by introducing explicit user representations. LOCKER [8] enhances short-term user dynamics via local self-attention. Intent-aware methods: NOVA [14] uses non-invasive self-attention to leverage side information.…”
Section: Experiments, 4.1 Experimental Setting
confidence: 99%