Proceedings of the 19th International Database Engineering & Applications Symposium (IDEAS '15), 2015
DOI: 10.1145/2790755.2790761
Automatic vs. Crowdsourced Sentiment Analysis

Cited by 18 publications (17 citation statements). References 7 publications.
“…Crowd cost is the amount paid to the platform and the crowd workers; completion time refers to the time from when the project was launched to the time when all required responses were received, and accuracy is the degree of similarity of the results compared to a gold standard. We use the manual evaluation of the same comments in [61] as the gold standard to measure the method's accuracy. As presented in Table 1, the paid method completed significantly faster than the unpaid version while the accuracy of the two crowdsourced methods is marginally similar.…”
Section: Results
confidence: 99%
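The quoted statement defines accuracy as the degree of similarity between the crowdsourced results and a gold standard. A minimal sketch of that comparison, assuming simple per-item label agreement (the label names and data are illustrative, not from the cited papers):

```python
# Hedged sketch: accuracy of crowdsourced labels measured against a
# manually evaluated gold standard, as per the quoted definition.

def accuracy(predicted, gold):
    """Fraction of predicted labels that agree with the gold standard."""
    if len(predicted) != len(gold):
        raise ValueError("label lists must be the same length")
    matches = sum(p == g for p, g in zip(predicted, gold))
    return matches / len(gold)

# Illustrative example: paid-crowd labels vs. gold-standard labels
gold = ["pos", "neg", "neu", "pos", "neg"]
paid = ["pos", "neg", "pos", "pos", "neg"]
print(accuracy(paid, gold))  # 0.8
```

The same function can score both the paid and unpaid crowdsourced methods against the gold standard, which is how their "marginally similar" accuracy would be compared.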
“…Free typing loose associations is just a lot easier than making a decision about the degree of match to a predefined category (especially hierarchical ones). It's like 90% of the value of a proper taxonomy but 10 times simpler" (Bishr & Kuhn 2007, p. 18) (Borromeo et al. 2015). Machine learning classifiers still need training for different events as well as validation (Palen et al. 2012; Cobb et al. 2013), so expectations of greater responsiveness or accuracy from machines alone may be unrealistic.…”
Section: Discussion
confidence: 99%
“…The second core part of sentiment analysis is sentiment classification: a classification that occurs at the phrase/sentence/submission level, usually based on aggregating the terms' labelled emotions. As with lexicon acquisition, the classification task can be automated [24,25,28,60,17,59] or performed manually [29,43,26,13].…”
Section: Introduction
confidence: 99%
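The statement above describes classifying a sentence by aggregating the labelled emotions of its terms. A minimal sketch of that lexicon-based aggregation, assuming a toy lexicon and a sum-of-scores rule (both illustrative, not drawn from the cited papers):

```python
# Hedged sketch of lexicon-based sentiment classification: the sentence
# label is the aggregation (here, a sum) of per-term sentiment scores.
# The lexicon and scoring scheme are illustrative assumptions.

LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2}

def classify(sentence):
    """Return a sentence-level label from aggregated term scores."""
    score = sum(LEXICON.get(tok, 0) for tok in sentence.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("a great movie with good acting"))  # positive
print(classify("terrible plot"))                   # negative
```

Manual classification replaces the lexicon lookup with human judgments per phrase or submission, which is what the next statement contrasts on accuracy and scalability.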
“…Manually labelled classification can achieve high accuracy, but it requires additional resources and is not easily scalable. On the other hand, automated processes are scalable but have lower accuracy [13,27].…”
Section: Introduction
confidence: 99%