Unsupervised Continual Learning with Multi-View Data Fusion for Dynamic Network Embedding
Preprint, 2023
DOI: 10.2139/ssrn.4449865

Cited by 1 publication (1 citation statement)
References 19 publications
“…Pre-trained language models have been widely used in NLP tasks, but they must first be fine-tuned for a specific task and therefore remain task-specific. To obtain more general embeddings, various pre-trained embedding models have been proposed, mostly built on contrastive learning frameworks [35], [36], [37]. These models start from pre-trained language models and are fine-tuned on data from a range of NLP tasks cast in a unified format, so that they can be applied directly to downstream tasks without further fine-tuning.…”
Section: Pre-trained Embedding Models
Confidence: 99%
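The contrastive frameworks the citation statement refers to typically train an encoder so that paired texts map to nearby embeddings, most often with an InfoNCE objective over in-batch negatives. Below is a minimal sketch of that objective, not the method of any cited model; the function name, the `temperature` value, and the random tensors in the usage example are illustrative assumptions.

```python
# Sketch of the in-batch-negative InfoNCE loss commonly used by
# contrastive embedding frameworks. Names here are illustrative,
# not an API from the referenced models.
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  pos_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """query_emb, pos_emb: (batch, dim) embeddings of paired texts.
    Row i of pos_emb is the positive for row i of query_emb; every
    other row in the batch serves as a negative."""
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    logits = q @ p.T / temperature          # (batch, batch) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)  # diagonal entries are the positives

# Usage with placeholder embeddings (in practice these would come from
# a sentence encoder being fine-tuned):
q = torch.randn(8, 768)
p = torch.randn(8, 768)
loss = info_nce_loss(q, p)
```

Once an encoder is fine-tuned this way on many tasks in a unified format, its embeddings can be reused directly on downstream tasks, which is the "no further fine-tuning" property the statement highlights.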