A problem in the study of face perception is that results can be confounded by poor stimulus control. Ideally, experiments should precisely manipulate facial features under study and tightly control irrelevant features. Software for 3D face modeling provides such control, but there is a lack of free and open-source alternatives specifically created for face perception research. Here, we provide such tools by expanding the open-source software MakeHuman. We present a database of 27 identity models and six expression pose models (sadness, anger, happiness, disgust, fear, and surprise), together with software to manipulate the models in ways that are common in the face perception literature, allowing researchers to: (1) create a sequence of renders from interpolations between two or more 3D models (differing in identity, expression, and/or pose), resulting in a "morphing" sequence; (2) create renders by extrapolation in a direction of face space, obtaining 3D "anti-faces" and caricatures; (3) obtain videos of dynamic faces from rendered images; (4) obtain average face models; (5) standardize a set of models so that they differ only in selected facial shape features; and (6) communicate with experiment software (e.g., PsychoPy) to render faces dynamically online. These tools vastly improve both the speed at which face stimuli can be produced and the level of control that researchers have over face stimuli. We validate the face model database and software tools through a small study on human perceptual judgments of stimuli produced with the toolkit.
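Manipulations (1) and (2) above reduce to linear interpolation and extrapolation over face-space coordinates. The following is a minimal sketch in Python/NumPy, assuming each face model can be flattened to a parameter vector; the function names are illustrative and are not the toolkit's actual API.

import numpy as np

def interpolate_faces(a, b, n_steps):
    """Frames of a morphing sequence: linear interpolation between
    two face parameter vectors, endpoints included."""
    for t in np.linspace(0.0, 1.0, n_steps):
        yield (1.0 - t) * a + t * b

def extrapolate_face(face, mean_face, strength):
    """Move along the face-space direction through `face`, relative to
    the average face: strength > 1 yields a caricature, strength = -1
    reflects the face through the mean, i.e., an 'anti-face'."""
    return mean_face + strength * (face - mean_face)

Setting strength between 0 and 1 in extrapolate_face recovers interpolation toward the average face, so a single extrapolation routine covers morphing toward the mean, caricaturing, and anti-face construction.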
Neuroimaging research is growing rapidly, providing expansive resources for synthesizing data. However, navigating these dense resources is complicated by the volume of research articles and the variety of experimental designs implemented across studies. The advent of machine learning algorithms and text-mining techniques has advanced automated labeling of published articles in biomedical research to alleviate such obstacles. A comprehensive examination of document features and classifier techniques for annotating neuroimaging articles has yet to be undertaken, however. Here, we evaluated which combination of corpus (abstract-only or full-article text), features (bag-of-words or Cognitive Atlas terms), and classifier (Bernoulli naïve Bayes, k-nearest neighbors, logistic regression, or support vector classifier) resulted in the highest predictive performance in annotating a selection of 2,633 manually annotated neuroimaging articles. We found that, when utilizing full-article text, data-driven features derived from the text performed best, whereas if article abstracts were used for annotation, features derived from the Cognitive Atlas performed better. Additionally, we observed that when features were derived from article text, anatomical terms appeared to be the most frequently utilized for classification purposes, and that cognitive concepts can be identified based on similar representations of these anatomical terms. Optimizing parameters for the automated classification of neuroimaging articles may result in a larger proportion of the neuroimaging literature being annotated with labels supporting the meta-analysis of psychological constructs.
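The kind of comparison described can be approximated with a standard scikit-learn pipeline. The sketch below is a single-label simplification (article annotation is in practice multi-label); documents and labels are placeholders for the corpus and its manual annotations, not the study's actual code.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def compare_classifiers(documents, labels, cv=5):
    """Cross-validate each classifier on binary bag-of-words features.
    `documents` is a list of article texts (abstracts or full text);
    `labels` gives one manually assigned label per article."""
    classifiers = {
        "Bernoulli naive Bayes": BernoulliNB(),
        "k-nearest neighbors": KNeighborsClassifier(),
        "logistic regression": LogisticRegression(max_iter=1000),
        "support vector classifier": SVC(),
    }
    for name, clf in classifiers.items():
        pipe = make_pipeline(CountVectorizer(binary=True), clf)
        scores = cross_val_score(pipe, documents, labels, cv=cv, scoring="f1_macro")
        print(f"{name}: mean macro-F1 = {scores.mean():.3f}")

Restricting CountVectorizer to a fixed word list (via its vocabulary parameter) would approximate the Cognitive Atlas feature condition instead of the data-driven bag-of-words one.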
Here, we take a computational approach to understand the mechanisms underlying face perception biases in depression. Thirty participants diagnosed with Major Depressive Disorder and thirty healthy control participants took part in three studies involving recognition of identity and emotion in faces. We used signal detection theory to determine whether any perceptual biases exist in depression aside from decisional biases. We found lower sensitivity to happiness in general, and lower sensitivity to both happiness and sadness with ambiguous stimuli. Our use of highly controlled face stimuli ensures that such asymmetry is truly perceptual in nature, rather than the result of studying expressions with inherently different discriminability. We found no systematic effect of depression on the perceptual interactions between face expression and identity, suggesting that depression is not associated with difficulty attending to one of these dimensions while filtering out the other. We show through simulation that the overall pattern of results, as well as other biases found in the literature, can be explained by a neurocomputational model in which neural populations encoding positive expressions are selectively suppressed.
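In signal detection theory, sensitivity (d') is estimated from hit and false-alarm rates separately from the response criterion, which is what allows perceptual biases to be disentangled from decisional ones. A minimal worked example in Python/SciPy (an illustration of the standard formulas, not the study's analysis code):

from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Decisional bias (criterion c), estimated independently of d'."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))

# Example: 80% hits, 30% false alarms
print(d_prime(0.80, 0.30))    # ~1.37: above-chance sensitivity
print(criterion(0.80, 0.30))  # ~-0.16: slight liberal response bias

On this analysis, lower sensitivity to happiness means a reduced d' for happy faces even when the criterion is held fixed, i.e., a genuinely perceptual rather than decisional asymmetry.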