Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. In the second iteration, 42.9% of students' peer-assigned grades fell within 5% of the staff-assigned grade, and 65.5% fell within 10%. On average, students assessed their own work 7% higher than staff did. Students also rated peers' work from their own country 3.6% higher than work from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students with more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance rubric items for improvement. We find that rubrics with a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.
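The accuracy figures and the high-variance-item analysis described above reduce to simple summary statistics over peer and staff scores. The sketch below is a minimal illustration, not the class's actual grading pipeline: the data layout (dictionaries of peer, staff, and per-item scores) and the 0-100 grade scale are assumptions made for the example.

```python
import numpy as np

def grading_stats(peer_grades, staff_grades):
    """Error of the aggregate (median) peer grade relative to the staff grade.

    peer_grades: dict mapping submission id -> list of peer grades (0-100 scale)
    staff_grades: dict mapping submission id -> staff grade (0-100 scale)
    """
    errors = []
    for sub_id, staff in staff_grades.items():
        peer_median = np.median(peer_grades[sub_id])
        errors.append(abs(peer_median - staff))
    errors = np.array(errors)
    return {
        "within_5": np.mean(errors <= 5),    # fraction within 5% of staff grade
        "within_10": np.mean(errors <= 10),  # fraction within 10% of staff grade
        "median_error": np.median(errors),
    }

def high_variance_items(item_scores):
    """Rank rubric items by mean across-grader variance.

    item_scores: dict mapping rubric item name -> 2D array of shape
                 (submissions, graders) holding each grader's score for that item.
    """
    variances = {item: np.nanvar(scores, axis=1).mean()
                 for item, scores in item_scores.items()}
    return sorted(variances.items(), key=lambda kv: kv[1], reverse=True)
```

Items that rank highest in the second function would be the candidates flagged for rubric revision (e.g., rewording ambiguous dimensions or enforcing a parallel sentence structure).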
We introduce an algorithm for automatic selection of semantically-resonant colors to represent data (e.g., using blue for data about "oceans", or pink for "love"). Given a set of categorical values and a target color palette, our algorithm matches each data value with a unique color. Values are mapped to colors by collecting representative images, analyzing image color distributions to determine value-color affinity scores, and choosing an optimal assignment. Our affinity score balances the probability of a color with how well it discriminates among data values. A controlled study shows that expert-chosen semantically-resonant colors improve speed on chart reading tasks compared to a standard palette, and that our algorithm selects colors that lead to similar gains. A second study verifies that our algorithm effectively selects colors across a variety of data categories.
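The final step described above, choosing an optimal one-to-one assignment of colors to data values, can be framed as a bipartite matching problem over a value-color affinity matrix. The sketch below assumes the affinity scores have already been computed from image color distributions; it is an illustrative reduction to the Hungarian algorithm via SciPy, not the paper's exact scoring model or assignment procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_colors(affinity):
    """Pick a one-to-one value-to-color assignment that maximizes total affinity.

    affinity: 2D array of shape (n_values, n_colors), where affinity[i, j]
              scores how well palette color j represents data value i
              (e.g., derived from color histograms of images retrieved for value i).
    """
    # linear_sum_assignment minimizes cost, so negate the scores to maximize affinity.
    rows, cols = linear_sum_assignment(-affinity)
    return dict(zip(rows, cols))  # value index -> assigned color index

# Hypothetical example: 3 values, 4 candidate palette colors.
affinity = np.array([
    [0.9, 0.1, 0.2, 0.3],  # "ocean"  most strongly matches color 0 (blue)
    [0.2, 0.8, 0.1, 0.4],  # "forest" most strongly matches color 1 (green)
    [0.3, 0.2, 0.7, 0.1],  # "desert" most strongly matches color 2 (tan)
])
print(assign_colors(affinity))  # -> {0: 0, 1: 1, 2: 2}
```

Balancing color probability against discriminability among values, as the abstract describes, would be folded into how the affinity matrix itself is computed before this assignment step.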
This article presents the results of an online creativity experiment (N = 81) that examines the effect of example timing on creative output. In the between-subjects experiment, participants drew animals to inhabit an alien Earth-like planet while being exposed to examples early, late, or repeatedly during the experiment. We find that exposure to examples increases conformity. Early exposure to examples improves creativity (measured by the number of common and novel features in drawings and by subjective ratings from independent raters), and repeated exposure to examples interspersed with prototyping leads to even better results. Late exposure to examples, however, increases conformity without improving creativity.
When a smart device talks, what should its voice sound like? Voice-enabled devices are becoming a ubiquitous presence in our everyday lives. Simultaneously, speech synthesis technology is rapidly improving, making it possible to generate increasingly varied and realistic computerized voices. Despite the flexibility and richness of expression that technology now affords, today's most common voice assistants often have female-sounding, polite, and playful voices by default. In this paper, we examine the social consequences of voice design and introduce a simple research framework for understanding how voice shapes the way we perceive and interact with smart devices. Based on the foundational paradigm of computers as social actors, and informed by research in human-robot interaction, this framework demonstrates how voice design depends on a complex interplay between characteristics of the user, device, and context. Through this framework, we propose a set of guiding questions to inform future research in the space of voice design for smart devices. CCS Concepts: • Human-centered computing → HCI theory, concepts and models.