Crowdsourcing vs. laboratory experiments – QoE evaluation of binaural playback in a teleconference scenario
2015 · DOI: 10.1016/j.comnet.2015.05.021

Cited by 10 publications (10 citation statements) · References 20 publications

“…It has to be noted that each worker provided 13.2 ratings on average, while each in-lab listener provided 29 ratings. Lower participant reliability in CS compared to in-lab tests was also reported in [20,18].…”
Section: CS-SCA Test Results (mentioning)
confidence: 67%
“…Other studies using CS for collecting ratings on a Likert scale concentrate on audio quality assessments [16,7] employing the discrete 5-point absolute-category-rating (ACR) scale for subjective Mean Opinion Scores, on voice naturalness [17], and on perceived Quality of Experience (QoE) in a teleconference system [18]. Different from using a Likert scale, the approach in [19] was to ask CS participants to rate correct or incorrect realizations of the /r/ sound in words.…”
Section: Previous Work (mentioning)
confidence: 99%
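
The statement above refers to the discrete 5-point ACR scale and the Mean Opinion Score (MOS). As a minimal illustration (not taken from any of the cited studies, and using made-up ratings), the MOS for one stimulus is simply the arithmetic mean of its ACR ratings, commonly reported together with a confidence interval:

```python
import statistics as stats

# Hypothetical ACR ratings (5-point scale: 1 = bad ... 5 = excellent)
# collected for a single stimulus from several crowdsourcing workers.
acr_ratings = [4, 3, 5, 4, 4, 2, 5, 3, 4, 4]

# The Mean Opinion Score (MOS) is the arithmetic mean of the ratings.
mos = stats.mean(acr_ratings)

# A 95% confidence interval is often reported alongside the MOS
# (normal approximation; sufficient for this illustration).
ci95 = 1.96 * stats.stdev(acr_ratings) / len(acr_ratings) ** 0.5

print(f"MOS = {mos:.2f} +/- {ci95:.2f}")
```
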
“…T. Hoßfeld et al. have stated [17] that, in general, every crowdsourcing task suffers from bad-quality results. However, even if the task is designed effectively, subjects might still submit unreliable and misleading survey results [41]. Establishing a trustworthy cluster of subjects (either paid or voluntarily registered) across distributed geographical locations, accessing the service via different network operators, provides a good understanding of the QoE of the service.…”
Section: Experiments Methodology (mentioning)
confidence: 99%
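
The passage above points at unreliable crowdsourcing submissions. One common screening heuristic (a sketch under assumptions, not the method used in [17] or [41]) is to flag workers whose ratings correlate poorly with the leave-one-out mean of the remaining raters; the worker IDs, data, and 0.5 threshold below are hypothetical:

```python
import statistics as stats  # statistics.correlation requires Python 3.10+

def rater_consistency(ratings_by_worker):
    """Pearson correlation of each worker's ratings with the mean rating
    that the *other* workers gave to the same stimuli.

    ratings_by_worker: dict mapping worker id -> list of ratings, where
    position i in every list refers to the same stimulus (assumed layout).
    """
    workers = list(ratings_by_worker)
    n_stimuli = len(next(iter(ratings_by_worker.values())))
    scores = {}
    for w in workers:
        own = ratings_by_worker[w]
        # Mean rating of all other workers per stimulus (leave-one-out).
        others = [
            stats.mean(ratings_by_worker[o][i] for o in workers if o != w)
            for i in range(n_stimuli)
        ]
        scores[w] = stats.correlation(own, others)
    return scores

# Hypothetical example: worker "w4" rates almost randomly.
data = {
    "w1": [5, 4, 2, 1, 4],
    "w2": [5, 4, 3, 1, 4],
    "w3": [4, 5, 2, 2, 5],
    "w4": [1, 5, 5, 1, 2],
}
for worker, r in rater_consistency(data).items():
    flag = "  <- review" if r < 0.5 else ""
    print(f"{worker}: r = {r:.2f}{flag}")
```
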
“…[9,10,11,12]) as well as audio and (synthetic) voice quality testing (e.g. [13,14,15]). In contrast, Crowdsourcing has been much less used for Web QoE research (e.g.…”
Section: Related Work (mentioning)
confidence: 99%