Ten years ago, we reported that SM, a patient with rare bilateral amygdala damage, showed an intriguing impairment in her ability to recognize fear from facial expressions. Since then, the importance of the amygdala in processing information about facial emotions has been borne out by a number of lesion and functional imaging studies. Yet the mechanism by which amygdala damage compromises fear recognition has not been identified. Returning to patient SM, we now show that her impairment stems from an inability to make normal use of information from the eye region of faces when judging emotions, a defect we trace to a lack of spontaneous fixations on the eyes during free viewing of faces. Although SM fails to look normally at the eye region in all facial expressions, her selective impairment in recognizing fear is explained by the fact that the eyes are the most important feature for identifying this emotion. Notably, SM's recognition of fearful faces became entirely normal when she was instructed explicitly to look at the eyes. This finding provides a mechanism to explain the amygdala's role in fear recognition, and points to new approaches for the possible rehabilitation of patients with defective emotion perception.
A neuroimaging study reveals how coupled brain oscillations at different frequencies align with quasi-rhythmic features of continuous speech such as prosody, syllables, and phonemes.
This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.
In very fast recognition tasks, scenes are identified as fast as isolated objects. How can this efficiency be achieved, considering the large number of component objects and interfering factors, such as cast shadows and occlusions? Scene categories tend to have distinct and typical spatial organizations of their major components. If human perceptual structures were tuned to extract this information early in processing, a coarse-to-fine process could account for efficient scene recognition: A coarse description of the input scene (oriented "blobs" in a particular spatial organization) would initiate recognition before the identity of the objects is processed. We report two experiments that contrast the respective roles of coarse and fine information in fast identification of natural scenes. The first experiment investigated whether coarse and fine information were used at different stages of processing. The second experiment tested whether coarse-to-fine processing accounts for fast scene categorization. The data suggest that recognition occurs at both coarse and fine spatial scales. By attending first to the coarse scale, the visual system can get a quick and rough estimate of the input to activate scene schemas in memory; attending to fine information allows refinement, or refutation, of the raw estimate.