2021 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm51629.2021.00015
Learning Transferable User Representations with Sequential Behaviors via Contrastive Pre-training

Cited by 41 publications (38 citation statements)
References 36 publications
“…Appendix B illustrates preliminary results from the identification of stereotypical entries in the two aforementioned training corpora. Similar approaches based on fine-tuning for bias mitigation showed promising results in the recent literature [15,20,23]…”
Section: Extension to Transformer-based Models (mentioning)
confidence: 87%
“…Extensive experiments conducted on 10 publicly available datasets from the UEA archive demonstrated that FormerTime could surpass previous strong baseline methods. In the future, we hope to empower the transferability of FormerTime [8].…”
Section: Discussion (mentioning)
confidence: 99%
“…In conjunction with images, we also observe that corresponding t-SNE embeddings are also collapsed near each class's mean in W space. Further, recent methods which have proposed the usage of contrastive learning for GANs, improve their data efficiency and prevent discriminator overfitting [14,16]. We also evaluate them by adding the contrastive conditioning method, which is D2D-CE loss-based on ReACGAN [17], to the baseline, where in results, we observe that the network omits to learn tail classes and produces head class images at their place (i.e., class confusion).…”
Section: Class Confusion and Class-specific Mode Collapse in Conditio... (mentioning)
confidence: 99%