Unlike frozen snapshots of facial expressions that we often see in photographs, natural facial expressions are dynamic events that unfold in a particular fashion over time. But how important are the temporal properties of expressions for our ability to reliably extract information about a person's emotional state? We addressed this question experimentally by gauging human performance in recognizing facial expressions with varying temporal properties relative to that of a statistically optimal ("ideal") observer. We found that people recognized emotions just as efficiently when viewing them as naturally evolving dynamic events, temporally reversed events, temporally randomized events, or single images frozen in time. Our results suggest that the dynamic properties of human facial movements may play a surprisingly small role in people's ability to infer the emotional states of others from their facial expressions.
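In studies of this kind, efficiency is commonly summarized as the squared ratio of human to ideal sensitivity (d′). The abstract does not give the authors' exact formula, so the following is a generic sketch of that standard definition, with hypothetical numbers:

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index from hit and false-alarm rates (equal-variance Gaussian model)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def efficiency(d_human, d_ideal):
    """Absolute efficiency: the fraction of available stimulus information the human uses."""
    return (d_human / d_ideal) ** 2

# Hypothetical example: a human observer vs. an ideal observer with d' = 3.0.
print(efficiency(d_prime(0.75, 0.35), 3.0))  # ~0.12
```

Comparing this ratio across the dynamic, reversed, randomized, and static conditions is what lets equal efficiencies be interpreted as "temporal properties add little usable information."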
Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans were measured using a two-alternative forced-choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation-selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
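The time-course fit described above can be illustrated with a saturating exponential rising from chance (0.5 for 2AFC) toward an asymptote. The exact parameterization, the onset-delay term, and the data points below are assumptions, since the abstract does not spell them out:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exponential(t, p_max, tau, t0):
    """2AFC accuracy rising from chance (0.5) with time constant tau (ms) after delay t0."""
    return 0.5 + (p_max - 0.5) * (1.0 - np.exp(-np.maximum(t - t0, 0.0) / tau))

# Hypothetical data: presentation times (ms) and proportion correct.
t = np.array([20, 40, 60, 80, 120, 160, 200], dtype=float)
p = np.array([0.55, 0.68, 0.78, 0.85, 0.91, 0.93, 0.94])

params, _ = curve_fit(saturating_exponential, t, p,
                      p0=[0.95, 50.0, 10.0],
                      bounds=([0.5, 1.0, 0.0], [1.0, 500.0, 50.0]))
p_max, tau, t0 = params
print(f"asymptote={p_max:.2f}, time constant={tau:.1f} ms, onset delay={t0:.1f} ms")
```

Fitting one such curve per complexity level is what yields the family of time constants (30-91 ms) reported above.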
Why do faces become easier to recognize with repeated exposure? Previous research has suggested that familiarity may induce a qualitative shift in visual processing from an independent analysis of individual facial features to an analysis that includes information about the relationships amongst features (Farah, Wilson, Drain, & Tanaka, 1998; Maurer, Le Grand, & Mondloch, 2002). We tested this idea by using a ‘summation-at-threshold’ technique (Gold, Mundy, & Tjan, 2012; Nandy & Tjan, 2008), in which an observer's ability to recognize each individual facial feature shown independently is used to predict their ability to recognize all of the features shown in combination. We found that, although people are better overall at recognizing familiar than unfamiliar faces, their ability to integrate information across features is similar for unfamiliar and highly familiar faces and is well predicted by their ability to recognize each of the facial features shown in isolation. These results are consistent with the idea that familiarity has a quantitative effect on the efficiency with which information is extracted from individual features, rather than a qualitative effect on the process by which features are combined.
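A common way to formalize the summation-at-threshold prediction is quadratic summation: if d′ grows linearly with signal contrast, the squared sensitivities of independently processed features should add. The sketch below assumes that formulation (the abstract does not spell out the authors' exact computation) and uses hypothetical feature values:

```python
import numpy as np

def predicted_combined_dprime(feature_dprimes):
    """Optimal-combination prediction: squared feature sensitivities add."""
    d = np.asarray(feature_dprimes, dtype=float)
    return np.sqrt(np.sum(d ** 2))

def integration_index(observed_combined_dprime, feature_dprimes):
    """~1.0 indicates optimal integration; < 1.0 indicates sub-optimal combination."""
    d = np.asarray(feature_dprimes, dtype=float)
    return observed_combined_dprime ** 2 / np.sum(d ** 2)

# Hypothetical example: eyes, nose, and mouth measured in isolation.
print(predicted_combined_dprime([1.2, 0.8, 1.0]))  # ~1.75
print(integration_index(1.6, [1.2, 0.8, 1.0]))     # ~0.83
```

On this formulation, "familiarity is quantitative, not qualitative" corresponds to familiar faces raising the individual feature d′ values without raising the integration index.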
The Sleep Number smart bed uses embedded ballistocardiography, together with network connectivity, signal processing, and machine learning, to detect heart rate (HR), breathing rate (BR), and sleep vs. wake states. This study evaluated the performance of the smart bed relative to polysomnography (PSG) in estimating epoch-by-epoch HR, BR, sleep vs. wake, mean overnight HR and BR, and summary sleep variables. Forty-five participants (aged 22–64 years; 55% women) slept one night on the smart bed with standard PSG. Smart bed data were compared to PSG by Bland–Altman analysis and Pearson correlation for epoch-by-epoch HR and BR. Agreement in sleep vs. wake classification was quantified using Cohen’s kappa, ROC analysis, sensitivity, specificity, accuracy, and precision. Epoch-by-epoch HR and BR were highly correlated with PSG (HR: r = 0.81, |bias| = 0.23 beats/min; BR: r = 0.71, |bias| = 0.08 breaths/min), as were estimations of mean overnight HR and BR (HR: r = 0.94, |bias| = 0.15 beats/min; BR: r = 0.96, |bias| = 0.09 breaths/min). Calculated agreement for sleep vs. wake detection included kappa (prevalence- and bias-adjusted) = 0.74 ± 0.11, AUC = 0.86, sensitivity = 0.94 ± 0.05, specificity = 0.48 ± 0.18, accuracy = 0.86 ± 0.11, and precision = 0.90 ± 0.06. For all-night summary variables, agreement was moderate to strong. Overall, the findings suggest that the Sleep Number smart bed may provide reliable metrics to unobtrusively characterize human sleep under real-life conditions.
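The agreement statistics reported above are standard and can be computed from epoch-paired device/PSG data roughly as follows. This is a generic sketch, not the study's analysis code; all function and variable names are illustrative:

```python
import numpy as np

def bland_altman(device, reference):
    """Bias, 95% limits of agreement, and Pearson r for paired measurements."""
    device = np.asarray(device, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = device - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    r = np.corrcoef(device, reference)[0, 1]  # Pearson correlation
    return bias, (bias - half_width, bias + half_width), r

def sleep_wake_agreement(pred_sleep, true_sleep):
    """Epoch-by-epoch classification metrics, with sleep as the positive class."""
    pred = np.asarray(pred_sleep, dtype=bool)
    true = np.asarray(true_sleep, dtype=bool)
    tp = np.sum(pred & true)
    tn = np.sum(~pred & ~true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / pred.size,
        "precision": tp / (tp + fp),
    }
```

The low specificity relative to sensitivity is a common pattern in such validations: wake epochs are rare during a sleep night, so misclassifying still wakefulness as sleep disproportionately hurts specificity.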
Important perceptual judgments are often made by combining the opinions of several individuals to make a collective decision, such as when teams of physicians make diagnoses based on medical images. Although group-level decisions are generally superior to the decisions made by individuals, it remains unclear whether collective decision making is most effective when information is redundantly provided to all individuals within a group, or when each individual is responsible for only a portion of the total information. Here, we addressed this question by having individuals and groups of different sizes make perceptual judgments about the presence of a weak visual signal. We found that groups viewing the entirety of the information significantly outperformed groups that viewed limited portions of the information, and that this difference in performance could be accounted for by a simple internal noise-averaging model. However, noise averaging alone was insufficient to account for improvements in individual and group-level performance as group size varied. These results indicate that sharing redundant information can enhance the quality of individual perceptual judgments and lead to better group decision making than dividing information across members of a group.
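One simple version of the internal noise-averaging idea: if each member's judgment equals the shared stimulus plus independent internal noise, averaging n responses shrinks the internal noise by a factor of sqrt(n), while the shared external (stimulus) noise does not average out. The functional form and parameter values below are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np

def group_dprime(signal, sigma_external, sigma_internal, n_members):
    """Predicted sensitivity when n members' judgments of the SAME stimulus are
    averaged: independent internal noise is reduced by sqrt(n); external noise,
    being common to all viewers, is not."""
    return signal / np.sqrt(sigma_external ** 2 + sigma_internal ** 2 / n_members)

# Illustrative parameters: group benefits saturate once internal noise is
# small relative to the shared external noise.
for n in (1, 2, 4, 8):
    print(n, round(group_dprime(1.0, 0.5, 1.0, n), 2))
```

This saturation is why redundant viewing can beat divided viewing: splitting the stimulus removes the chance to average internal noise over the same evidence.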