2020 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata50022.2020.9378221
Interpretation of Sentiment Analysis with Human-in-the-Loop

Cited by 7 publications (9 citation statements)
References 13 publications
“…A more recent study, termed "human-in-the-loop," highlighted the significance of human-machine collaboration for sentiment analysis and of determining the degree of agreement between several human and machine annotators. References [25] and [26] analyzed the performance of, and agreement between, off-the-shelf sentiment analysis tools, reporting that sentiment measurement by an average of 5.63 coders achieved satisfactory reliability (Krippendorff's alpha = .80); the assessment was made on a sample of 148 randomly selected newspaper and website headlines, manually annotated by a team of 22 initially trained student coders. The research in [27] focused on more expressive annotations through a two-phase annotation arrangement and showed that perceived emotions can differ from expressed emotions in an event-focused corpus, in turn affecting classifier performance.…”
Section: Literature Review
confidence: 99%
“…Yeruva et al. [29] discussed the differences between human annotators and machine annotators in sentiment analysis tasks. The authors presented a human-in-the-loop approach to explore human-machine collaboration for sentiment analysis.…”
Section: Sentiment Annotation Techniques in Education
confidence: 99%
“…line, stanza and poem were instead taken into consideration in PO-EMO and THU-FSPC (Tsinghua University-Fine-grained Sentimental Poetry Corpus) (Chen et al. 2019). The only annotation at sentence level, an intermediate unit of analysis, is reported by Yeruva et al. (2020) on Aeschylus's tragedies;
• granularity of classification (from binary classes to wide sets of emotions): for example, Kabithaa has only two labels (positive and negative), while both Kāvi and PERC are based on the Indian concept of Navrasa, which distinguishes nine emotions, both positive, such as shaanti (meaning 'peace'), and negative, such as raudra (meaning 'anger');
• perspective (annotation of the emotions as intended by the author or as perceived by the reader): both approaches are covered by the available datasets. It is interesting to note that during a preliminary annotation of the Iliad, both perspectives were taken into consideration and annotated by two different groups.…”
Section: Related Work
confidence: 99%