2024
DOI: 10.1016/j.inffus.2023.102068

Hierarchical graph augmented stacked autoencoders for multi-view representation learning

Jianping Gou, Nannan Xie, Jinhua Liu et al.
Cited by 4 publications (1 citation statement)
References 34 publications
“…Sun et al used a BERT-based model to capture semantic features from contexts via fine-tuning, which significantly improves the working performance [16]. Text encoders are widely applied to various tasks [17,18]. Encouragingly, advances in contrastive learning hold great potential in natural language processing (NLP) tasks.…”
Section: Contextual Information Learning
Confidence: 99%