2022 RIVF International Conference on Computing and Communication Technologies (RIVF)
DOI: 10.1109/rivf55975.2022.10013792

Investigating Monolingual and Multilingual BERT Models for Vietnamese Aspect Category Detection

Cited by 7 publications (3 citation statements)
References 17 publications
“…Liao et al. [20] used RoBERTa (a robustly optimized BERT pre-training approach) for contextual feature representation and combined it with a 1D-CNN and cross-attention for aspect category classification. In [21], the authors evaluated different pre-trained language models (monolingual and multilingual) on Vietnamese.…”
Section: Related Work
confidence: 99%
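The RoBERTa-plus-CNN idea mentioned in the excerpt above can be illustrated with a short sketch. This is a minimal illustration, not the implementation of [20]: it encodes a sentence with a RoBERTa checkpoint and applies a 1D convolution over the contextual token representations before classification; the cross-attention component is omitted, and the checkpoint name, kernel size, and number of aspect categories are illustrative assumptions.

```python
# Minimal sketch: RoBERTa contextual features + 1D-CNN head for aspect
# category classification. Hyperparameters and category count are assumed.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RobertaCnnAspectClassifier(nn.Module):
    def __init__(self, model_name="roberta-base", num_categories=12, kernel_size=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # 1D convolution slides over the token dimension of the embeddings
        self.conv = nn.Conv1d(hidden, 256, kernel_size, padding=kernel_size // 2)
        self.classifier = nn.Linear(256, num_categories)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) contextual token representations
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Conv1d expects (batch, channels, seq_len)
        features = torch.relu(self.conv(states.transpose(1, 2)))
        pooled, _ = features.max(dim=2)   # max-pool over tokens
        return self.classifier(pooled)    # one logit per aspect category

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
batch = tokenizer(["The battery life is great but the screen is dim."],
                  return_tensors="pt", padding=True, truncation=True)
logits = RobertaCnnAspectClassifier()(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # (1, num_categories)
```

Max-pooling over the convolved token features is one common way to obtain a fixed-size sentence vector; the cited work may use a different pooling or attention scheme.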
“…Vietnamese data makes up the majority of the dataset. For the Vietnamese language, XLM-R is the best multilingual model according to previous research [28].…”
Section: Approach
confidence: 99%
“…Multilingual model: We chose XLM-R over mT5 [21] and mBERT [22] because XLM-R generally performs better than mT5 and mBERT at the same model size (see the original paper for details). The work of [23] demonstrated that XLM-R is currently the best multilingual model for the Vietnamese language.…”
Section: Classifier Architecture
confidence: 99%
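To make the XLM-R classifier choice concrete, here is a minimal sketch assuming a multi-label formulation, where the model assigns an independent probability to each aspect category for a Vietnamese sentence. The category labels, checkpoint, and 0.5 decision threshold are illustrative assumptions, not details taken from the cited works.

```python
# Minimal sketch: XLM-R fine-tuned for multi-label aspect category detection.
# Labels, checkpoint, and threshold are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CATEGORIES = ["FOOD#QUALITY", "SERVICE#GENERAL", "PRICE#GENERAL"]  # example labels

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(CATEGORIES),
    problem_type="multi_label_classification",  # sigmoid + BCE loss during fine-tuning
)

sentence = "Đồ ăn ngon nhưng phục vụ hơi chậm."  # "The food is good but service is a bit slow."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Report every category whose probability exceeds the assumed 0.5 threshold.
detected = [c for c, p in zip(CATEGORIES, probs) if p > 0.5]
print(detected)
```

Before fine-tuning, the classification head is randomly initialized, so the printed categories are meaningless; the sketch only shows the inference interface of a multi-label aspect category detector built on XLM-R.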