2021
DOI: 10.3389/fcomp.2021.686050

Crowdsourcing Ecologically-Valid Dialogue Data for German

Abstract: Despite their increasing success, user interactions with smart speech assistants (SAs) are still very limited compared to human-human dialogue. One way to make SA interactions more natural is to train the underlying natural language processing modules on data which reflects how humans would talk to an SA if it were capable of understanding and producing natural dialogue given a specific task. Such data can be collected by applying a Wizard-of-Oz (WOz) approach, in which both the user and the system side are played by humans. WOz a…

Cited by: 1 publication (1 citation statement)
References: 43 publications (86 reference statements)
“…Moreover, as annotators are compensated not by the time they spend but rather by the number of annotated instances, they are compelled to work fast to maximize their monetary gain, which can negatively affect annotation quality (Drutsa et al., 2020) or even result in spamming (Hovy et al., 2013). It can also be difficult to find crowdworkers for the task at hand, for instance due to small worker pools for languages other than English (Pavlick et al., 2014; Frommherz and Zarcone, 2021) or because the task requires special qualifications (Tauchmann et al., 2020). Finally, the deployment of crowdsourcing remains ethically questionable due to undervalued payment (Fort et al., 2011; Cohen et al., 2016), privacy breaches, or even psychological harm to crowdworkers (Shmueli et al., 2021).…”
Section: Introduction
Confidence: 99%