Argument discovery via crowdsourcing (2017)
DOI: 10.1007/s00778-017-0462-9

Cited by 44 publications (15 citation statements) · References 58 publications
“…In this case, crowdsourcing is used to obtain a great number of labels quickly and inexpensively, as in Hovy et al (). In relation to the problems solved by these applications, we find that most of them are related to open challenges in natural language processing (for an introduction to some of them, we refer the reader to Hirschberg and Manning ()): sentiment analysis of online media (Brew, Greene, & Cunningham, ; Salter‐Townshend & Murphy, ); humor classification of jokes (Costa et al, ); temporal relation classification (Ng & Kan, ); word sense (Passonneau et al, ); classification of marketing messages on Twitter (Machedon et al, ); part‐of‐speech tagging (Hovy et al, ); identifying fake Amazon reviews (Fornaciari & Poesio, ); sequence labeling (Nguyen, Wallace, et al, ; Rodrigues et al, ); estimation of discourse segmentation (Huang et al, ); emotion estimation from narratives (Duan et al, ); crowdsourced translation (Yan et al, ); entity disambiguation (Li, Yang, et al, ; Nguyen, Duong, et al, ; Zhou et al, ); topic models (Liu et al, ; Rodrigues et al, ); personal assistants (Shin & Paek, ; Yang et al, ); corpus creation for Arabic dialects (Alshutayri & Atwell, ); and context-sensitive tasks (Fang et al, ).…”
Section: Publication Areas (mentioning, confidence: 99%)
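The recurring pattern across these applications is to collect redundant, inexpensive labels and aggregate them into a single answer per item. A minimal sketch, assuming plain majority voting over hypothetical worker answers (the function and data below are illustrative, not taken from any of the cited works):

```python
from collections import Counter

def majority_vote(labels_per_item: dict[str, list[str]]) -> dict[str, str]:
    """Aggregate redundant crowd labels by keeping the most frequent
    label for each item; ties are broken arbitrarily by Counter order."""
    return {
        item: Counter(labels).most_common(1)[0][0]
        for item, labels in labels_per_item.items()
    }

# Hypothetical example: three workers label the sentiment of two tweets.
answers = {
    "tweet_1": ["positive", "positive", "negative"],
    "tweet_2": ["negative", "negative", "negative"],
}
print(majority_vote(answers))  # {'tweet_1': 'positive', 'tweet_2': 'negative'}
```

More refined aggregators, for example ones that weight workers by estimated reliability, keep this same per-item interface.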
“…For instance, some statistical features, such as the number of retweets or replies, are considered [1,20,11]. Similarly, some user-level features are also used, such as the credibility or readability of the users [1,9,14]. A different approach is to examine network-level features to detect rumors.…”
Section: Related Work (mentioning, confidence: 99%)
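To make the two feature families above concrete, here is a hypothetical extractor that turns one tweet record into the kind of flat feature vector such rumor detectors consume; the field names and the credibility proxies are assumptions for illustration, not details from the cited systems:

```python
def extract_features(tweet: dict) -> dict[str, float]:
    """Build a flat feature vector from one tweet record.
    Statistical features: raw engagement counts.
    User-level features: crude credibility proxies (assumed here)."""
    user = tweet["user"]
    return {
        # Statistical features (cf. retweet/reply counts above)
        "retweet_count": float(tweet.get("retweets", 0)),
        "reply_count": float(tweet.get("replies", 0)),
        # User-level features (cf. credibility of the user)
        "user_verified": 1.0 if user.get("verified") else 0.0,
        "follower_ratio": user.get("followers", 0) / max(user.get("following", 1), 1),
    }

sample = {"retweets": 120, "replies": 4,
          "user": {"verified": False, "followers": 50, "following": 2000}}
print(extract_features(sample))
```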
“…In particular, Sarma et al [62] decompose crowdsourcing tasks to decrease the difficulty of questions, thereby improving the chance that workers provide correct answers. Although such decomposition is useful for large-scale data, it might render the answer matrix sparser, which requires further customization [42,47,48]. Joglekar et al [25], on the other hand, measure confidence intervals for worker error rates, making the classification of worker types more fine-grained and thus the filtering of faulty workers more accurate.…”
Section: Crowd-expert Collaboration (mentioning, confidence: 99%)
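The confidence-interval idea attributed to Joglekar et al [25] can be sketched with a standard Wilson score interval over a worker's observed error rate; the interval choice, the gold-question setup, and the filtering threshold below are assumptions for illustration, not details from that paper:

```python
import math

def wilson_interval(errors: int, answers: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a worker's true error rate,
    given `errors` wrong answers out of `answers` gold-labeled questions."""
    if answers == 0:
        return (0.0, 1.0)  # no evidence yet: maximally uncertain
    p = errors / answers
    denom = 1 + z**2 / answers
    center = (p + z**2 / (2 * answers)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / answers + z**2 / (4 * answers**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# Flag workers whose error rate is confidently above a tolerance.
TOLERANCE = 0.3  # assumed threshold, not from the paper
low, high = wilson_interval(errors=12, answers=20)
is_faulty = low > TOLERANCE  # even the lower bound exceeds the tolerance
print(low, high, is_faulty)
```

Filtering on the interval's lower bound rather than the point estimate avoids rejecting workers who have simply answered too few questions, which is one way such fine-grained worker classification can be made more accurate.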