2020
DOI: 10.1007/978-981-15-6168-9_15

Multi-task Learning for Aspect and Polarity Recognition on Vietnamese Datasets

Cited by 7 publications (3 citation statements)
References 14 publications

“…In [53], the researchers introduced a multi-task model using a BiLSTM with a self-attention mechanism for the ATE task and a CNN for the APC task. In addition, in [54], the authors proposed a multi-task deep learning model for the ATE and APC tasks on Vietnamese datasets for the restaurant and hotel domains. Another study [20] employed a fine-tuned BERT model as a multi-task model for ABSA, with a self-attention layer on top of the BERT model.…”
Section: Multi-task Learning for ABSA
confidence: 99%
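A minimal sketch of the kind of multi-task ABSA architecture this excerpt describes: a shared fine-tuned BERT encoder, a self-attention layer on top, and two task heads for aspect and polarity recognition. This is not the cited authors' code; the model name, head sizes, and label counts are illustrative assumptions.

```python
# Hedged sketch of a multi-task ABSA model: shared BERT encoder,
# self-attention on top, separate aspect (ATE) and polarity (APC) heads.
# All names and dimensions below are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskABSA(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased",
                 num_aspects=12, num_polarities=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Self-attention layer on top of the BERT token outputs.
        self.self_attn = nn.MultiheadAttention(hidden, num_heads=8,
                                               batch_first=True)
        # Two task-specific heads sharing the same representation.
        self.aspect_head = nn.Linear(hidden, num_aspects)       # ATE head
        self.polarity_head = nn.Linear(hidden, num_polarities)  # APC head

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        # key_padding_mask is True at padding positions, which are ignored.
        attn_out, _ = self.self_attn(h, h, h,
                                     key_padding_mask=~attention_mask.bool())
        pooled = attn_out.mean(dim=1)  # simple mean pooling over tokens
        return self.aspect_head(pooled), self.polarity_head(pooled)
```

Training would sum the two task losses (e.g. a multi-label loss for aspects and a cross-entropy loss for polarities) over the shared encoder.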
“…In this section, we present the hyper-parameters of the model and information on the various pre-trained BERT models used in this paper. For the pre-processing component, we re-implemented the steps of the research works [27], [28] on Vietnamese ABSA. In addition, each pre-trained model has its own specific pre-processing steps for the text data; therefore, we carefully reviewed and applied these pre-processing steps to the text input.…”
Section: B. Experiments Settings
confidence: 99%
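A hedged illustration of the point in this excerpt: each pre-trained BERT variant carries its own pre-processing contract, so the tokenizer (and any required word segmentation) must match the checkpoint. The sample sentence is made up; for example, PhoBERT expects word-segmented Vietnamese input, while multilingual BERT tokenizes raw text directly.

```python
# Model-specific pre-processing: the tokenizer must match the checkpoint.
from transformers import AutoTokenizer

text = "Thức ăn ngon nhưng phục vụ hơi chậm."

# Multilingual BERT: raw text in, subword tokens out.
mbert_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
print(mbert_tok.tokenize(text))

# PhoBERT: input should already be word-segmented (syllables of one word
# joined by underscores), e.g. produced by a Vietnamese segmenter.
phobert_tok = AutoTokenizer.from_pretrained("vinai/phobert-base")
segmented = "Thức_ăn ngon nhưng phục_vụ hơi chậm ."
print(phobert_tok.tokenize(segmented))
```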
“…Moreover, we survey two well-known Vietnamese word segmenters, namely pyvi 1 and underthesea 2, which remain scientifically unpublished to date. For instance, the pyvi toolkit was used in the research of Van Thin et al. [37] on sentiment analysis over the VLSP2018-SA corpus [24] and in the research of Nguyen et al. [38] on product reviews. As another instance, Nguyen et al. [39] used the underthesea toolkit for preprocessing their dataset of electronic product comments.…”
Section: Introduction
confidence: 99%
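A short usage sketch of the two Vietnamese word segmenters named in the excerpt, using their documented entry points; the sample sentence is illustrative.

```python
# pyvi and underthesea word segmentation, side by side.
from pyvi import ViTokenizer
from underthesea import word_tokenize

text = "Sản phẩm này dùng rất tốt."

# pyvi joins the syllables of each multi-syllable word with underscores.
print(ViTokenizer.tokenize(text))        # e.g. "Sản_phẩm này dùng rất tốt ."

# underthesea returns a list of words, or an underscore-joined string
# when called with format="text".
print(word_tokenize(text))
print(word_tokenize(text, format="text"))
```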