2014
DOI: 10.1007/978-3-319-06028-6_10
Measuring the Effectiveness of Gamesourcing Expert Oil Painting Annotations

Abstract: Tasks that require users to have expert knowledge are difficult to crowdsource. They are mostly too complex to be carried out by non-experts, and the available experts in the crowd are difficult to target. Adapting an expert task into a non-expert user task, thereby enabling the ordinary "crowd" to accomplish it, can be a useful approach. We studied whether a simplified version of an expert annotation task can be carried out by non-expert users. Users conducted a game-style annotation task of oil pain…

Cited by 8 publications (8 citation statements). References 6 publications.
“…The authors found that crowd annotators, drawn from museum attendees, used a different vocabulary than professional annotators, but that such annotations were effective in improving the retrieval of the artworks. In [15], crowds…”
Section: Related Work
confidence: 97%
“…Previous work investigated how crowds can support the artwork annotation process [13,14,15]. For instance, "The Steve project" [13] studied crowd tagging of collections from more than 12 USA-based museums and compared crowd and professional taggers…”
Section: Related Work
confidence: 99%
“…We investigated whether crowd workers can perform a simplified version of an expert task if they are given assistance, and how well they perform compared to experts [8]. We showed that the crowd workers' contributions were largely in line with the experts' judgements, and that some cases of strong disagreement indicated a need for re-evaluation on the experts' side…”
Section: Progress Made So Far
confidence: 98%
“…For tasks with higher complexity or that require expert knowledge, however, users must be trained in order to fully understand the task and provide high-quality contributions. Our initial study [8] suggests that by combining these three components and creating feedback loops between them, we can create a system that successively leads to substantial improvements in data sets…”
Section: Motivation
confidence: 99%