Proceedings of the 15th International Conference on Intelligent User Interfaces 2010
DOI: 10.1145/1719970.1720006

Towards maximizing the accuracy of human-labeled sensor data

Abstract: We present two studies that evaluate the accuracy of human responses to an intelligent agent's data classification questions. Prior work has shown that agents can elicit accurate human responses, but the applications vary widely in the data features and prediction information they provide to labelers when asking for help. In an initial analysis of this work, we identified the five most popular features: uncertainty, amount of context, level of context, prediction of an answer, and request for user feedback. We p…
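
To make the five features concrete, here is a minimal, hypothetical sketch (not taken from the paper) of how an agent might bundle them into a single labeling request. The class and field names, the default context settings, and the scikit-learn-style classifier interface (predict_proba, classes_) are all assumptions made for illustration.

    from dataclasses import dataclass

    @dataclass
    class LabelRequest:
        sensor_window: list        # raw readings shown to the labeler
        uncertainty: float         # 1 - max class probability, in [0, 1]
        context_amount: int        # e.g. seconds of surrounding data to display
        context_level: str         # "raw", "summarized", or "interpreted"
        predicted_answer: str      # the agent's current best guess
        request_feedback: bool     # whether to ask the user for feedback

    def build_request(model, window, context_amount=30, context_level="summarized"):
        # Assumes a fitted scikit-learn-style classifier (illustrative only).
        probs = model.predict_proba([window])[0]
        top = probs.argmax()
        return LabelRequest(
            sensor_window=window,
            uncertainty=1.0 - float(probs[top]),
            context_amount=context_amount,
            context_level=context_level,
            predicted_answer=str(model.classes_[top]),
            request_feedback=True,
        )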

Cited by 22 publications (19 citation statements, spanning 2014–2024)
References 25 publications
“…Feedback from the learner steers the teacher's examples toward correcting wrong classifications. Subsequent research in this area has extended interactive ML to a wide range of systems, addressing challenges from giving users more control to providing them with useful information during interactions [34,14,22,16,8,31]. Our work shares the goal of this line of research: improving the effectiveness and usability of ML systems that interact with humans.…”
Section: Interactive Machine Learning
confidence: 99%
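
As an illustration of the learner/teacher loop this excerpt describes, the following is a minimal uncertainty-sampling sketch. The ask_teacher callback, the use of scikit-learn's LogisticRegression, and the least-confidence query strategy are assumptions for illustration, not the method of any cited paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def interactive_loop(X_pool, X_seed, y_seed, ask_teacher, rounds=10):
        # ask_teacher(example, guess) stands in for any human-input channel;
        # it should return the teacher's (possibly corrective) label.
        X_train, y_train = list(X_seed), list(y_seed)
        model = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            model.fit(np.asarray(X_train), np.asarray(y_train))
            probs = model.predict_proba(X_pool)
            idx = int(np.argmin(probs.max(axis=1)))   # least-confident example
            guess = model.predict(X_pool[idx:idx + 1])[0]
            label = ask_teacher(X_pool[idx], guess)
            X_train.append(X_pool[idx])
            y_train.append(label)
            X_pool = np.delete(X_pool, idx, axis=0)   # drop the queried example
        return model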
“…Amershi et al [1] investigate how the grouping and ordering of large data sets presented to users impact their input to a learner. Rosenthal and Dey [31] investigate how additional information provided to users when requesting labels can reduce errors in their inputs. Our approach differs from these in the content and medium of the information provided to users to improve their inputs.…”
Section: Interactive Machine Learning
confidence: 99%
“…Similar to the works above, in this paper we combine a prediction stage (human, automatic, or hybrid) with a subsequent verification stage carried out by CrowdFlower workers. To improve the crowdsourcing workflow and ensure high-quality answers, various machine learning mechanisms have recently been introduced [25,26,27,28]. Closest to our task, ZenCrowd [8] explores the combination of probabilistic reasoning and crowdsourcing to improve the quality of entity linking.…”
Section: Crowdsourcing Task Design
confidence: 99%
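
A hedged sketch of the predict-then-verify pattern this excerpt describes, reduced to its aggregation step: each candidate answer (from a human, automatic, or hybrid predictor) is assumed to have been judged by several crowd verifiers upstream (e.g. on a platform such as CrowdFlower), and an answer is kept only on majority acceptance. The function name, data shapes, and vote threshold are illustrative assumptions; no platform API is invoked.

    def verify(candidates, judgments, min_votes=3):
        # candidates: {item_id: proposed_answer}
        # judgments:  {item_id: [True/False verifier votes]}
        accepted = {}
        for item_id, answer in candidates.items():
            votes = judgments.get(item_id, [])
            # Keep the answer only with enough votes and a strict majority.
            if len(votes) >= min_votes and sum(votes) > len(votes) / 2:
                accepted[item_id] = answer
        return accepted

    # Example: three verifiers, two accept, so the answer is kept.
    print(verify({"q1": "Paris"}, {"q1": [True, True, False]}))   # {'q1': 'Paris'}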
“…In previous comparative studies, Mechanical Turk has been shown to be a reliable way of gathering data for visualization studies (Heer & Bostock, 2010), behavioral studies (Mason & Suri, 2011), and rankings of perceptual data (Rosenthal & Dey, 2010). Further, it significantly increases the number of participants available for data collection at very low cost (Kittur et al, 2008).…”
Section: Using Crowdsourcing For Experiments
confidence: 99%