The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was recognised most accurately and fear least accurately. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth proved important for the recognition of happiness and the eye/brow area for the recognition of sadness, results were less consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotion, the mouth playing an important role for happiness and the eyes/brows for sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition of emotional facial expressions cannot be reduced to simple feature-based or holistic processing for all emotions.
When asked to detect target letters while reading a text, participants miss more letters in frequently occurring function words than in less frequent content words. To account for this pattern of results, known as the missing-letter effect, Greenberg, Healy, Koriat, and Kreiner (2004) proposed the guidance-organization (GO) model, which integrates the two leading models of the missing-letter effect while incorporating innovative assumptions based on the literature on eye movements during reading. The GO model was evaluated by monitoring the eye movements of participants while they searched for a target letter in a continuous text display. Results revealed the usual missing-letter effect, and many empirical benchmark effects from the eye movement literature were observed. However, contrary to the predictions of the GO model, response latencies were longer for function words than for content words. Alternative models are discussed that can accommodate both the error and the response latency data for the missing-letter effect.
Previous studies have revealed that preschool-age children who are not yet readers pay little attention to written text in a shared book reading situation (see Evans & Saint-Aubin, 2005). The current study was aimed at investigating the constancy of these results across reading development by monitoring eye movements in shared book reading for children from kindergarten to Grade 4. Children were read books of three difficulty levels. The results revealed a higher proportion of time, of landing positions, and of reading-like saccades on the text as grade level increased and as reading skills improved. More precisely, there was a link between the difficulty of the material and attention to the text: children spent more time on text that was within their reading abilities than on text whose difficulty exceeded their reading skills.
When participants search for a target letter while reading, they make more omissions if the target letter is embedded in frequently used words or in the most frequent meaning of a polysemic word. According to the processing time hypothesis, this occurs because familiar words and meanings are identified faster, leaving less time for letter identification. Contrary to the predictions of the processing time hypothesis, with a rapid serial visual presentation procedure, participants were slower at detecting target letters in more frequent words or in the most frequent meaning of a word (Experiments 1 and 2), and at detecting the word itself rather than a target letter (Experiment 3). In Experiments 4 and 5, participants self-initiated the presentation of each word, and the same pattern of results was observed as in Experiments 1 and 3. Positive correlations were also found between omission rates and response latencies.
Of the basic emotional facial expressions, fear is typically recognised less accurately as a result of being confused with surprise. According to the perceptual-attentional limitation hypothesis, the difficulty in recognising fear could be attributed to its visual configuration being similar to that of surprise: indeed, the two expressions share more muscle movements than distinguish them. The main goal of the current study was to test the perceptual-attentional limitation hypothesis in the recognition of fear and surprise by recording eye movements and manipulating the distinctiveness between expressions. Results revealed that when the brow lowerer is the only distinctive feature between expressions, accuracy is lower, participants spend more time looking at the stimuli, and they make more comparisons between expressions than when the stimuli also include the lip stretcher. These results not only support the perceptual-attentional limitation hypothesis but extend its definition by suggesting that it is not solely the number of distinctive features that matters but also their qualitative value.