Tone mapping operators (TMOs) are functions that map high dynamic range (HDR) images to media with limited dynamic range while aiming to preserve the perceptual cues of the scene that govern its aesthetic quality. Evaluating the aesthetic quality of TMOs is non-trivial due to the high subjectivity of the preferences involved. Traditionally, TMO aesthetic quality has been evaluated via subjective experiments in a controlled laboratory environment. The last decade, however, has brought a surge in the popularity of crowdsourcing as an alternative methodology for conducting subjective experiments, although uncontrolled experiment conditions and unreliable participant behaviour cast doubt on the trustworthiness of the collected data. In this study, we explore the possibility of using crowdsourcing platforms for the subjective quality evaluation of TMOs. We conducted three experiments with systematic changes to investigate the effect of experiment conditions and participant recruitment methods on the collected subjective data. Our results show that subjective evaluation of TMO aesthetic quality can be conducted on the Prolific crowdsourcing platform with negligible differences from laboratory experiments. Furthermore, we draw objective conclusions about the effect of the number of observers on the certainty of the pairwise comparison results.
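The abstract does not specify how the number of observers is related to the certainty of the pairwise comparison results; the sketch below is only a minimal illustration of that general relationship, assuming a Bradley-Terry scaling model (a common choice for pairwise comparison data, not necessarily the one used in the paper). The TMO quality scores, observer counts, and repetition counts are invented for illustration.

```python
# Minimal sketch (not the paper's analysis): simulate pairwise comparisons
# of four hypothetical TMO outputs under a Bradley-Terry model and measure
# how the spread of the recovered quality scores shrinks as observers are
# added. All numeric values here are made-up illustration values.
import numpy as np

rng = np.random.default_rng(0)
true_scores = np.array([0.0, 0.4, 0.9, 1.5])  # hypothetical TMO qualities

def simulate_votes(n_observers):
    """Each observer compares every pair once; condition i beats j with
    Bradley-Terry probability exp(s_i) / (exp(s_i) + exp(s_j))."""
    n = len(true_scores)
    wins = np.zeros((n, n))
    for _ in range(n_observers):
        for i in range(n):
            for j in range(i + 1, n):
                p_i = 1.0 / (1.0 + np.exp(true_scores[j] - true_scores[i]))
                if rng.random() < p_i:
                    wins[i, j] += 1
                else:
                    wins[j, i] += 1
    return wins

def bt_scores(wins, iters=200):
    """Crude maximum-likelihood Bradley-Terry fit via the classic
    minorization-maximization update (Hunter, 2004)."""
    wins = wins + 0.25              # small pseudo-count avoids zero-win degeneracy
    np.fill_diagonal(wins, 0.0)
    w = wins.sum(axis=1)            # total wins per condition
    comps = wins + wins.T           # comparisons per pair
    s = np.ones(len(w))
    for _ in range(iters):
        denom = (comps / (s[:, None] + s[None, :])).sum(axis=1)
        s = w / np.maximum(denom, 1e-12)
        s /= s.sum()
    return np.log(s) - np.log(s).mean()

for n_obs in (5, 15, 50):
    reps = np.array([bt_scores(simulate_votes(n_obs)) for _ in range(30)])
    print(f"{n_obs:3d} observers: score std across runs = "
          f"{reps.std(axis=0).round(3)}")
```

Running this shows the standard deviation of the estimated scale values falling as the observer count grows, which is the qualitative effect the abstract refers to.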
The last decade has brought a surge in the popularity of crowdsourcing platforms for the subjective quality evaluation of multimedia content. The reduced need for intervention during the experiment and the larger participant pools of crowdsourcing platforms encourage researchers to join this trend. However, the unreliability of participant behaviors poses a barrier to the wide adoption of these platforms. Although many works exist on detecting unreliable observers in rating experiments, there is still a lack of methodology for detecting unreliable observers in the quality evaluation of multimedia content using pairwise comparisons. In this work, we propose methods to identify irregular annotator behaviors in the pairwise comparison paradigm. We compare the proposed methods' effectiveness in two scenarios: quality evaluation of traditional 2D images and of 3D interactive multimedia. We conducted two crowdsourcing experiments for two different Quality of Experience assessment tasks and inserted carefully designed synthetic spammer profiles to evaluate the proposed tools. Our results suggest that the detection of unreliable observers is highly task-dependent: the influence of spammer behavior intensity and of the proportion of spammers among the observers can be more severe on tasks with higher subjectivity. Based on these findings, we provide guidelines and recommendations for developing spammer detection algorithms for subjective pairwise quality evaluation of multimedia content.
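The abstract does not describe the synthetic spammer profiles or the proposed detection methods in detail. As a hedged illustration only, the sketch below injects one commonly used synthetic profile, a uniform "random clicker", into simulated pairwise votes and flags observers whose agreement with the per-pair majority vote is unusually low. The observer counts, noise scale, and outlier threshold are all assumptions made for the example, not values from the paper.

```python
# Hedged illustration (not the paper's detector): inject uniform random
# clickers among reliable observers in a pairwise comparison task, then
# flag observers with abnormally low agreement with the majority vote.
import numpy as np

rng = np.random.default_rng(1)
n_conditions, n_reliable, n_spammers = 6, 20, 4
quality = rng.normal(size=n_conditions)  # hypothetical true quality scale
pairs = [(i, j) for i in range(n_conditions)
         for j in range(i + 1, n_conditions)]

def observer_votes(spammer):
    """Return one observer's votes (True means condition i preferred)."""
    votes = []
    for i, j in pairs:
        if spammer:                        # uniform random clicker profile
            votes.append(rng.random() < 0.5)
        else:                              # noisy but quality-driven vote
            p = 1.0 / (1.0 + np.exp(-(quality[i] - quality[j]) / 0.5))
            votes.append(rng.random() < p)
    return np.array(votes)

data = [observer_votes(False) for _ in range(n_reliable)]
data += [observer_votes(True) for _ in range(n_spammers)]
data = np.array(data)                      # shape: observers x pairs

majority = data.mean(axis=0) > 0.5         # per-pair majority vote
agreement = (data == majority).mean(axis=1)  # per-observer agreement rate
threshold = agreement.mean() - 2 * agreement.std()  # simple outlier rule
flagged = np.where(agreement < threshold)[0]
print("flagged observers:", flagged)       # spammers sit at indices 20-23
```

Because random clickers still agree with the majority about half the time, this simple rule can miss mild spammers, which is consistent with the abstract's observation that detection difficulty depends on the task's subjectivity.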