2022
DOI: 10.1609/hcomp.v10i1.21984

Gesticulate for Health’s Sake! Understanding the Use of Gestures as an Input Modality for Microtask Crowdsourcing

Abstract: Human input is pivotal in building reliable and robust artificial intelligence systems. By providing a means to gather diverse, high-quality, representative, and cost-effective human input on demand, microtask crowdsourcing marketplaces have thrived. Despite the unmistakable benefits available from online crowd work, the lack of health provisions and safeguards, along with existing work practices, threatens the sustainability of this paradigm. Prior work has investigated worker engagement and mental health, yet…

Cited by 3 publications (3 citation statements)
References 48 publications
“…The authors developed a system that allows the interpretation of sign language to caption text, and also provides an opportunity for deaf and mute individuals to assist those who are unable to read sign language. More recently, and in closely related work, Allen, Hu, and Gadiraju (2022) proposed the use of gestures as an input modality for microtask crowdsourcing.…”
Section: Crowdsourcing and Sign Languages (SL)
confidence: 99%
“…To this end, we conducted a set of experiments to explore the performance and perception of gesture inputs for microtasks [1]. We used three distinct microtasks, informed by the taxonomy described by Gadiraju et al [11].…”
Section: Background and Introduction
confidence: 99%
“…The models are sourced from MediaPipe. The landmarks are converted into more directly interpretable data via multiple methods using Kalidokit. Details on the augmentation can be found in the library documentation.…”
Section: Background and Introduction
confidence: 99%