2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
DOI: 10.1109/percomw.2018.8480236

Assessing Annotation Consistency in the Wild

Abstract: The process of human annotation of sensor data is at the base of research areas such as participatory sensing and mobile crowdsensing. While much research has been devoted to assessing the quality of sensor data, the same cannot be said about annotations, which are fundamental to obtain a clear understanding of users' experience. We present an evaluation of an interdisciplinary annotation methodology allowing users to continuously annotate their everyday life. The evaluation is done on a dataset from a project …

Cited by 1 publication (3 citation statements). References 18 publications.
“…The consistency of annotations related to data collected by sensors on a smartphone was studied in [10]. The main goal was to relate the daily behavior of students with their academic performance, using information about their locations and movements.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
“…Such an assumption is becoming less and less sustainable in a world where non-expert users are involved in annotating a constant stream of data, be it the flow of news in the Web (see, e.g., [10]), or the flow of sensor data in pervasive and ubiquitous computing (see, e.g., [15,17]). In the case of the latter scenario, the same input can be proposed multiple times, resulting in it being often labeled differently even by the same user [18]. The research presented here is part of, and motivated by, a long-term series of experiments (two experiments per year for two years) aimed at studying the University student life and at correlating the students' behaviors with their academic performance, measured concerning grades and credits passed. The long-term goal is to support students who should increase their performance, thus also minimizing the possibility of a failure in getting the degree.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
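
As a rough illustration of the intra-annotator inconsistency described in the statement above (not taken from the cited papers; the function name and the example labels are hypothetical), the following minimal Python sketch measures how often a single user assigns the same label when the same input is proposed more than once:

from collections import defaultdict

def intra_annotator_consistency(annotations):
    """Fraction of repeated inputs that a user labels identically every time.

    `annotations` is a list of (input_id, label) pairs produced by one user.
    Inputs proposed only once are ignored, since consistency is undefined for them.
    """
    labels_by_input = defaultdict(list)
    for input_id, label in annotations:
        labels_by_input[input_id].append(label)

    repeated = [labels for labels in labels_by_input.values() if len(labels) > 1]
    if not repeated:
        return None  # no input was annotated more than once
    consistent = sum(1 for labels in repeated if len(set(labels)) == 1)
    return consistent / len(repeated)

# Hypothetical log: the same situation is labeled differently on its second occurrence.
user_log = [("t1", "studying"), ("t2", "leisure"), ("t1", "studying"), ("t2", "studying")]
print(intra_annotator_consistency(user_log))  # 0.5

Aggregating this ratio per user (or per annotation question) gives a simple, interpretable signal of how stable in-the-wild annotations are, under the assumption that repeated proposals of the same input should receive the same label.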