Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
DOI: 10.18653/v1/w19-4802

Sentiment Analysis Is Not Solved! Assessing and Probing Sentiment Classification

Abstract: Neural methods for SA have led to quantitative improvements over previous approaches, but these advances are not always accompanied by a thorough analysis of the qualitative differences. Therefore, it is not clear what outstanding conceptual challenges for sentiment analysis remain. In this work, we attempt to discover what challenges still prove a problem for sentiment classifiers for English and to provide a challenging dataset. We collect the subset of sentences that an (oracle) ensemble of state-of-the-art…
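The selection criterion described in the abstract can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' actual pipeline: it assumes gold labels and per-model predictions are already available, and it keeps only the sentences that every model in the ensemble misclassifies, i.e. the cases where even an oracle choosing the best model per sentence would still be wrong. The function name oracle_misclassified and the toy data are hypothetical.

# Minimal sketch of the "oracle ensemble" filtering idea from the abstract
# (hypothetical names and data; not the paper's actual code).
from typing import List, Sequence

def oracle_misclassified(
    sentences: Sequence[str],
    gold: Sequence[int],
    model_preds: Sequence[Sequence[int]],  # one prediction list per model
) -> List[str]:
    hard = []
    for i, sent in enumerate(sentences):
        # The oracle ensemble is correct if at least one model gets it right;
        # a sentence is "challenging" only if no model does.
        if not any(preds[i] == gold[i] for preds in model_preds):
            hard.append(sent)
    return hard

# Example usage with toy data (labels: 0 = negative, 1 = positive):
sents = ["not bad at all", "great, just great...", "I loved it"]
gold = [1, 0, 1]
preds_a = [0, 1, 1]
preds_b = [0, 1, 1]
print(oracle_misclassified(sents, gold, [preds_a, preds_b]))
# -> ['not bad at all', 'great, just great...']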

Cited by 28 publications (45 citation statements). References 40 publications.
“…Dataset Evaluation Chen et al (2016) and Barnes et al (2019) also use model results to assess dataset difficulty for reading comprehension and sentiment analysis. Other work also explores bias in datasets and the adoption of shallow heuristics on biased datasets in natural language inference (Niven and Kao, 2019) and argument reasoning comprehension.…”
Section: Related Work
mentioning
confidence: 99%
“…A similar approach was used e.g. by Barnes et al (2019).[7] See the supplemental material for details on the models, training procedure, hyperparameters, and task performance.[8] https://tac.nist.gov/2014/KBP/ColdStart/guidelines/TAC_KBP_2014_Slot_Descriptions_V1.4.pdf…”
mentioning
confidence: 99%
“…2016; Farias and Rosso 2017; Barnes et al. 2019a). Here, we have shown that explicit training via hierarchical MTL is a viable way to incorporate some of this information.…”
Section: Discussion
mentioning
confidence: 99%
“…Recent research, however, challenges the idea that end-to-end learning is able to fully capture compositional effects (Verma, Kim, and Walter 2018; Barnes et al. 2019a). It is therefore worth asking whether we can help the model by providing some form of explicit training on compositional phenomena in sentiment.…”
Section: Related Work
mentioning
confidence: 99%