Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)
DOI: 10.18653/v1/n19-1257
Zero-Shot Cross-Lingual Opinion Target Extraction

Abstract: Aspect-based sentiment analysis involves the recognition of so-called opinion target expressions (OTEs). To automatically extract OTEs, supervised learning algorithms are usually employed which are trained on manually annotated corpora. The creation of these corpora is labor-intensive, and sufficiently large datasets are therefore usually only available for a very narrow selection of languages and domains. In this work, we address the lack of available annotated data for specific languages by proposing a zero-shot cross-lingual approach for the extraction of opinion target expressions.
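As a concrete illustration of the task the abstract describes, OTE extraction is typically cast as token-level span labeling. The toy example below is not from the paper; it simply shows one review sentence in the common BIO encoding and how spans are recovered from tags.

```python
# Illustrative toy example (not from the paper): opinion target extraction
# framed as BIO tagging, where the B/I span marks the opinion target
# expression ("sea bass") in a restaurant review.
tokens = ["The", "sea", "bass", "was", "delicious", "."]
tags   = ["O",   "B",   "I",    "O",   "O",         "O"]

# A supervised tagger learns to predict `tags` from `tokens`; spans are then
# recovered by grouping each B tag with the I tags that follow it.
def spans(tokens, tags):
    out, cur = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if cur:
                out.append(" ".join(cur))
            cur = [tok]
        elif tag == "I" and cur:
            cur.append(tok)
        else:
            if cur:
                out.append(" ".join(cur))
            cur = []
    if cur:
        out.append(" ".join(cur))
    return out

print(spans(tokens, tags))  # ['sea bass']
```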

Cited by 9 publications (7 citation statements)
References 15 publications
“…We treat English as the source language and other languages as targets. Following existing works to simulate a true unsupervised setting (Jebbara and Cimiano, 2019; Hu et al., 2020), we use the English validation set in all experiments for model selection. The original workshop also provides training data for each target language; we thus discard the labels of the training set in each target language and use the raw sentences as unlabeled data, similar to previous studies (Wang and Pan, 2018; Wu et al., 2020).…”
Section: Dataset (mentioning, confidence: 99%)
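A minimal sketch of the protocol this excerpt describes, with invented toy data (the structures below are assumptions, not the cited papers' actual code): gold labels on the target side are discarded so only raw sentences remain, and model selection relies on the English validation set alone.

```python
# Toy stand-ins for SemEval-style data; each example is (tokens, BIO tags).
# These structures are assumptions for illustration only.
en_dev = [(["The", "pizza", "was", "cold"], ["O", "B", "O", "O"])]
fr_train_gold = [(["La", "pizza", "était", "froide"], ["O", "B", "O", "O"]),
                 (["Service", "lent"], ["B", "O"])]

# Simulate the true unsupervised setting: drop the target-language labels
# and keep only the raw sentences as unlabeled data.
fr_unlabeled = [tokens for tokens, _ in fr_train_gold]

# Model selection then compares checkpoints on `en_dev` only, never touching
# target-language gold labels.
print(fr_unlabeled)  # [['La', 'pizza', 'était', 'froide'], ['Service', 'lent']]
```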
“…Another line of work uses cross-lingual word embeddings trained on large parallel bilingual corpora (Ruder et al., 2019). By switching the word embeddings between different languages, the model can be used in a language-agnostic manner (Barnes et al., 2016; Akhtar et al., 2018; Wang and Pan, 2018; Jebbara and Cimiano, 2019).…”
Section: Related Work (mentioning, confidence: 99%)
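A sketch of the embedding-swapping idea in this excerpt, under the assumption that two monolingual tables have already been aligned into one shared space; the random matrices below merely stand in for real aligned embeddings.

```python
import numpy as np

# Stand-ins for aligned cross-lingual embeddings: both tables are assumed
# to live in one shared 300-dimensional vector space.
rng = np.random.default_rng(0)
en_vocab = {"food": 0, "service": 1}
fr_vocab = {"nourriture": 0, "service": 1}
en_emb = rng.normal(size=(len(en_vocab), 300)).astype(np.float32)
fr_emb = rng.normal(size=(len(fr_vocab), 300)).astype(np.float32)

def featurize(tokens, vocab, emb):
    """Map tokens to vectors via one language's embedding table."""
    return np.stack([emb[vocab[t]] for t in tokens if t in vocab])

# Training: the tagger consumes English vectors.
x_train = featurize(["food", "service"], en_vocab, en_emb)
# Test time: only the lookup table is swapped; the tagger's weights, which
# operate on the shared space, stay untouched.
x_test = featurize(["nourriture", "service"], fr_vocab, fr_emb)
```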
“…For the sentiment and document classification task, we train a fully-connected layer on top of the output of the [CLS] token, which is considered to be the representation of the input sequence. For the opinion target extraction task, we formulate it as a sequence labeling task (Agerri and Rigau, 2019; Jebbara and Cimiano, 2019). Extracting opinion target tokens then amounts to classifying each token as Beginning, Inside, or Outside of an aspect.…”
Section: Pre-trained Models (mentioning, confidence: 99%)
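A minimal sketch of the token-level setup this excerpt describes, using Hugging Face transformers. The encoder choice and label set are assumptions, and the classification head below is freshly initialized, so its predictions are meaningless until fine-tuning.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Three labels for the BIO scheme: Beginning, Inside, Outside of an aspect.
# "bert-base-multilingual-cased" is an assumed encoder, not necessarily the
# cited papers' exact choice.
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=3)

enc = tokenizer("The pizza was cold but the staff were friendly.",
                return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits        # (1, seq_len, 3): one score triple per subword
pred = logits.argmax(dim=-1).squeeze(0)  # per-subword B/I/O predictions
```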
“…One important application of crosslingual embeddings is the transfer of models trained on a high-resource language to a low-resource one (Lin et al., 2019; Schuster et al., 2019a; Artetxe and Schwenk, 2019). The latest multilingual transformer encoders such as BERT (Devlin et al., 2019) and XLM (Conneau et al., 2020) have made it possible to develop robust crosslingual models through zero-shot learning that requires no labeled training data on the target side (Jebbara and Cimiano, 2019; Chidambaram et al., 2019; Chi et al., 2020). However, these approaches tend not to work as well for languages whose words cannot be easily aligned.…”
Section: Introduction (mentioning, confidence: 99%)
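A sketch of the zero-shot recipe these works rely on, again with transformers; this is a hypothetical snippet, not any cited paper's code, and the fine-tuning step on English labels is elided. The point is that one multilingual checkpoint serves every language with no target-side training.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "xlm-roberta-base" is an assumed model choice; in a real pipeline this
# checkpoint would first be fine-tuned on English labeled data.
name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

def predict(sentence):
    # The identical weights score input in any language the encoder covers.
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        return model(**enc).logits.argmax(dim=-1).item()

# Zero-shot: the same model is applied to English and German input alike.
print(predict("The food was great."))
print(predict("Das Essen war großartig."))
```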