Proceedings of the 2019 3rd International Conference on Natural Language Processing and Information Retrieval
DOI: 10.1145/3342827.3342850
Zero-Shot Multilingual Sentiment Analysis using Hierarchical Attentive Network and BERT

Cited by 15 publications (4 citation statements)
References 12 publications
“…Recently, many pre-trained models provided by open-source NLP libraries, such as BERT and NLTK, were introduced to minimize the effort and resources required to learn general knowledge about a language and its structure (i.e., existing words, their meanings, and similarities). These transformer models are typically trained on very large monolingual or multilingual unannotated corpora (i.e., on raw text) in a self-supervised manner and are therefore not adjusted for specific NLP problems [43]. With the help of transfer learning, the general knowledge previously acquired by the pre-trained word- or sentence-transformer models can be augmented and fine-tuned to tackle specific NLP problems (including the sentiment analysis task).…”
Section: Related Work
confidence: 99%
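The transfer-learning pattern this statement describes, taking a pre-trained transformer encoder, attaching a classification head, and fine-tuning it on labelled sentiment data, can be illustrated with a minimal sketch. The checkpoint name and the toy batch below are illustrative assumptions, not details taken from the cited paper:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # binary sentiment: negative / positive
)

# Toy labelled batch standing in for a real sentiment corpus.
texts = ["The film was wonderful.", "A dull, disappointing read."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss on the head
outputs.loss.backward()
optimizer.step()  # one fine-tuning step; a real run loops over epochs
```

Only the small classification head starts from scratch; the encoder weights are merely adjusted, which is why far less labelled data is needed than for training from raw text.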
“…Kuratov and Arkhipov [23] used transfer learning from a multilingual BERT model to a monolingual model for SA in Russian. Sarkar et al. [24] proposed a Hierarchical Attentive Network using BERT for multilingual document-level SA. Recent studies report SA on code-mixed Hindi-English text [25,26], which was provided as SemEval 2020 Task 9 [27].…”
Section: Related Work
confidence: 99%
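The zero-shot cross-lingual idea behind these works, fine-tuning a multilingual encoder on sentiment labels in one language and applying it unchanged to another, reduces to a short sketch. The checkpoint and the German test sentence are assumptions for illustration; the cited papers' exact architectures (e.g., the hierarchical attentive network) are not reproduced here:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Shared multilingual encoder: one vocabulary and weight set covers ~100 languages.
MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# ... fine-tune on English sentiment data only, as in the previous sketch ...

# Zero-shot step: score a German review although no German labels were seen.
model.eval()
batch = tokenizer("Das Buch war ausgezeichnet.", return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print("positive" if logits.argmax(-1).item() == 1 else "negative")
```

Because the languages share one subword vocabulary and embedding space, the sentiment decision boundary learned from English transfers, imperfectly but usably, to the unseen language.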
“…Zero-shot learning eliminates the need for manual data labeling, offering a promising avenue for automating sentiment analysis tasks. While zero-shot learning has already shown promising results for general text classification tasks [7,6] and has also been tested specifically for sentiment analysis [8,9,10,11,5], its practical application in digital humanities (DH) projects remains relatively scarce.…”
Section: Introduction
confidence: 99%
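One common way the zero-shot setting described here is realised in practice, an assumed setup rather than the specific method of the cited works, is to pose sentiment analysis as natural-language inference over candidate labels, with no labelled sentiment data at all:

```python
from transformers import pipeline

# NLI-based zero-shot classifier: each candidate label is turned into a
# hypothesis ("This example is positive.") and scored against the text.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The plot dragged, but the acting saved the evening.",
    candidate_labels=["positive", "negative", "neutral"],
)
print(result["labels"][0], round(result["scores"][0], 3))  # top label + score
```

The label set is chosen at inference time, which is what makes the approach attractive for DH corpora where annotation budgets are small and category schemes vary from project to project.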