In this study, our primary aim is to determine empirically the role that skill plays in image aesthetics, and whether skill can be inferred from the ratings given by a diverse group of judges. To this end, we collected and analyzed data from 168 subjects on a set of 221 images taken by 33 photographers of varying photographic skill and experience. We also experimented with the rating scales used by previous studies in this domain, introducing a binary rating system for collecting judges' opinions. The study further demonstrates the use of Amazon Mechanical Turk as a crowd-sourcing platform for collecting scientific data and for evaluating the skill of the participating judges. We use a variety of performance and correlation metrics to evaluate the consistency of ratings across different rating scales and compare our findings. A novel feature of our study is an attempt to define a threshold based on the consistency of ratings when judges rate duplicate images. Our conclusion deviates from earlier findings and from our own expectations: the ratings could not determine photographers' skill levels at a statistically significant level.
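The duplicate-image consistency check described above can be sketched as follows. This is a minimal illustration, not the study's actual procedure: the judge names, binary ratings, and the 0.75 cutoff are all assumed values chosen for the example.

```python
def duplicate_agreement(pairs):
    """Fraction of duplicate-image pairs on which one judge gave the
    same binary rating (0 = dislike, 1 = like) to both copies."""
    agree = sum(1 for a, b in pairs if a == b)
    return agree / len(pairs)

# Hypothetical judges and their ratings on (original, duplicate) pairs.
judges = {
    "judge_A": [(1, 1), (0, 0), (1, 1), (0, 1)],  # consistent on 3 of 4
    "judge_B": [(1, 0), (0, 1), (1, 0), (0, 0)],  # consistent on 1 of 4
}

THRESHOLD = 0.75  # assumed cutoff; a real study would calibrate this
consistent_judges = {name for name, pairs in judges.items()
                     if duplicate_agreement(pairs) >= THRESHOLD}
print(consistent_judges)  # only judge_A meets the threshold
```

Judges falling below the threshold could then be excluded before computing the rating-versus-skill correlations.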