Crowd workers are human and thus sometimes make mistakes. To ensure high-quality output, requesters often issue redundant jobs with gold test questions and sophisticated aggregation mechanisms based on expectation maximization (EM). While these methods yield accurate results in many cases, they fail on extremely difficult problems with local minima, such as situations where the majority of workers get the answer wrong. Indeed, this has led some researchers to conclude that on some tasks crowdsourcing can never achieve high accuracy, no matter how many workers are involved. This paper presents a new quality-control workflow, called MicroTalk, that requires some workers to Justify their reasoning and asks others to Reconsider their decisions after reading counter-arguments from workers with opposing views. Experiments on a challenging NLP annotation task with workers from Amazon Mechanical Turk show that (1) argumentation improves the accuracy of individual workers by 20%, (2) restricting consideration to workers with complex explanations improves accuracy even more, and (3) our complete MicroTalk aggregation workflow produces much higher accuracy than simpler voting approaches for a range of budgets.
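The Justify/Reconsider control flow described above can be sketched as a simple loop. This is an illustrative reconstruction, not the paper's actual MicroTalk implementation: the helper callables `ask_label`, `ask_justification`, and `ask_reconsider` are hypothetical stand-ins for posting microtasks to a crowd platform.

```python
import random

def microtalk_style_label(workers, ask_label, ask_justification, ask_reconsider):
    """Collect labels; on disagreement, gather justifications and show each
    worker a counter-argument from a worker holding the opposing view."""
    votes = {w: ask_label(w) for w in workers}
    answers = set(votes.values())
    if len(answers) > 1:  # disagreement: trigger the argumentation phase
        arguments = {a: [] for a in answers}
        for w, a in votes.items():
            arguments[a].append(ask_justification(w, a))
        for w, a in list(votes.items()):
            opposing = [x for x in answers if x != a]
            counter = random.choice(arguments[random.choice(opposing)])
            votes[w] = ask_reconsider(w, a, counter)
    # aggregate the (possibly revised) answers by majority vote
    return max(set(votes.values()), key=list(votes.values()).count)
```

In this sketch, workers who already agree are never asked to argue, so the extra Justify/Reconsider cost is only paid on contested items.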
We discuss the development of Tactile Graphics with a Voice (TGV), a system for accessing label information in tactile graphics using QR codes. Blind students often rely on tactile graphics to access textbook images, and many of these images contain a large number of text labels that must be made accessible. TGV replaces these labels with QR codes, as an alternative to Braille; the codes are read with a smartphone application. We evaluated the system in a longitudinal study in which 10 blind and low-vision participants completed tasks using three different modes of the smartphone application: (1) no guidance, (2) verbal guidance, and (3) finger-pointing guidance. Our results show that TGV is an effective way to access text in tactile graphics, especially for blind users who are not fluent in Braille. We also found that preferences varied greatly across the modes, indicating that future work should support multiple modes. We expand upon the algorithms we used to implement the finger-pointing guidance, as well as the algorithms that automatically place QR codes on documents, and discuss work we have begun on a Google Glass version of the application.
The "wisdom of crowds" dictates that aggregate predictions from a large crowd can be surprisingly accurate, rivaling predictions by experts. Crowds, meanwhile, are highly heterogeneous in their expertise. In this work, we study how the heterogeneous uncertainty of a crowd can be directly elicited and harnessed to produce more efficient aggregations from a crowd, or to provide the same efficiency from smaller crowds. We present and evaluate a novel strategy for eliciting sufficient information about an individual's uncertainty: allow individuals to make multiple simultaneous guesses, and reward them based on the accuracy of their closest guess. We show that our multiple-guesses scoring rule is an incentive-compatible elicitation strategy for aggregations across populations under the reasonable technical assumption that the individuals all hold symmetric log-concave belief distributions from the same location-scale family. We first show that the multiple-guesses scoring rule is strictly proper for a fixed set of quantiles of any log-concave belief distribution. With properly elicited quantiles in hand, we show that when the belief distributions are also symmetric and all belong to a single location-scale family, we can use interquantile ranges to furnish weights for certainty-weighted crowd aggregation. We evaluate our multiple-guesses framework empirically through a series of incentivized guessing experiments on Amazon Mechanical Turk, and find that certainty-weighted crowd aggregations using multiple guesses outperform aggregations using single guesses without certainty weights.
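The certainty-weighting idea above can be sketched concretely. In this minimal, hypothetical example, each worker's two guesses are treated as symmetric quantiles of their belief distribution, and each worker is weighted by the inverse of their interquantile range (a narrower range signals higher certainty); the function name and exact weighting scheme are illustrative, not the paper's precise formulation.

```python
def certainty_weighted_estimate(guess_pairs):
    """Aggregate (low, high) guess pairs into a single estimate,
    weighting each worker by the inverse of their guess spread."""
    weights, centers = [], []
    for low, high in guess_pairs:
        spread = high - low
        # Narrow spread -> high weight; guard against a zero spread.
        weights.append(1.0 / spread if spread > 0 else 1.0)
        centers.append((low + high) / 2.0)
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, centers)) / total

# A confident worker (98, 102) dominates two less certain ones,
# pulling the aggregate close to 100.
print(certainty_weighted_estimate([(98, 102), (60, 140), (90, 130)]))
```

Contrast this with an unweighted mean of the pair midpoints (100, 100, 110), which would ignore how sure each worker is.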