Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2010
DOI: 10.1145/1753326.1753504
Characterizing debate performance via aggregated twitter sentiment

Cited by 341 publications (207 citation statements). References 13 publications.
“…Table 1. Twitter datasets used for the evaluation:

Dataset                                          | Tweets | Positive | Negative
Obama McCain Debate (OMD) [9]                    |   1081 |      393 |      688
Health Care Reform (HCR) [19]                    |   1354 |      397 |      957
Stanford Sentiment Gold Standard (STS-Gold) [16] |   2034 |      632 |     1402

Sentiment Lexicons: As described in Section 4, initial sentiments of terms in SentiCircle are extracted from a sentiment lexicon (prior sentiment).…”
Section: Dataset
confidence: 99%
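The prior-sentiment step quoted above can be sketched as a simple lexicon lookup. The lexicon entries and score scale below are invented for illustration; they are not the actual lexicon SentiCircle draws from.

```python
# Toy prior-sentiment lexicon (hypothetical terms and scores).
PRIOR_LEXICON = {"good": 0.7, "great": 0.9, "bad": -0.6, "awful": -0.9}

def prior_sentiment(term: str) -> float:
    """Look up a term's prior sentiment; unknown terms default to neutral (0.0)."""
    return PRIOR_LEXICON.get(term.lower(), 0.0)

# Prior scores for each token of a toy tweet, before any contextual adjustment.
scores = [prior_sentiment(t) for t in "Great debate but awful moderation".split()]
```

SentiCircle then adjusts these priors using the term's context; the lookup here covers only the initialization step the quote describes.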
“…To validate our above observation, we analyse the human sentiment votes on the 58 entities in the STS-Gold dataset. Figure 5 shows entities under patterns 2, 5, and 6, along with the number of times they receive negative, positive, and neutral sentiment in tweets according to the three human coders. We observe that entities in patterns 2 and 6 occur very infrequently in tweets, yet with consistent sentiment.…”
Section: Within-pattern Sentiment Consistency
confidence: 99%
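The per-entity vote aggregation described in the quote can be sketched as a majority-label tally with an agreement ratio. The entity names and votes below are invented examples, not data from STS-Gold.

```python
from collections import Counter

def majority_label(votes):
    """Return the most common label and the fraction of coders who chose it."""
    label, count = Counter(votes).most_common(1)[0]
    return label, count / len(votes)

# Hypothetical votes from three human coders for two entities.
entity_votes = {
    "entity_a": ["negative", "negative", "negative"],  # fully consistent
    "entity_b": ["positive", "neutral", "positive"],   # 2/3 agreement
}
consistency = {entity: majority_label(v) for entity, v in entity_votes.items()}
```

An agreement ratio of 1.0 corresponds to the "consistent sentiment" the authors observe for infrequent entities in patterns 2 and 6.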
“…Shamma and Diakopoulos [15] showed that the social structure and the conversational content of tweets can provide insight into a media event's structure and semantic content: quantitatively through activity peaks and qualitatively through keyword mining. Diakopoulos et al. [16], after collecting tweets during the 2010 U.S. State of the Union presidential address, used them to annotate a video of the event.…”
Section: Crowdsourcing Media Annotation and Motivation
confidence: 99%
“…Instead, for PT [34], we needed a solution that would work in multiple and mixed languages, was simple to deploy, and would work on a variety of specialized corpora such as philosophy, aesthetics, or design. Diakopoulos and Shamma [15] took another approach to sentiment classification: they used Amazon Mechanical Turk to hand-annotate tweets with sentiment labels. Turkers were compensated $0.05 per ten tweets analyzed.…”
Section: Tweet Content Analysis
confidence: 99%
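The quoted pay rate ($0.05 per batch of ten tweets) allows a back-of-the-envelope cost estimate. The sketch below assumes one annotation pass per tweet; the original study may have collected multiple judgments per tweet, so treat the figure as illustrative only.

```python
import math

RATE_PER_BATCH = 0.05   # dollars per batch of ten tweets, from the quoted text
TWEETS_PER_BATCH = 10

def annotation_cost(n_tweets: int) -> float:
    """Total payout, assuming tweets are grouped into batches of ten
    and partial batches are paid at the full batch rate."""
    return math.ceil(n_tweets / TWEETS_PER_BATCH) * RATE_PER_BATCH

# Using the OMD dataset size (1081 tweets) from Table 1 as an example input.
cost_omd = annotation_cost(1081)
```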