Background
The processing mechanisms of visual working memory (VWM) have been extensively explored over the past decade. However, how perceptual information is extracted into VWM remains largely unclear. The current study investigated this issue by testing whether perceptual information is extracted into VWM in an integrated-object manner, such that all irrelevant information is extracted (object hypothesis); in a feature-based manner, such that only target-relevant information is extracted (feature hypothesis); or in a manner analogous to processing in visual perception (analogy hypothesis).

Methodology/Principal Findings
High-discriminable information, which is processed at the parallel stage of visual perception, and fine-grained information, which is processed via focal attention, were selected as representatives of perceptual information. The analogy hypothesis predicts that whereas high-discriminable information is extracted into VWM automatically, fine-grained information is extracted only when it is task-relevant. By manipulating the information type of the irrelevant dimension in a change-detection task, we found that performance was affected, and the ERP component N270 was enhanced, when a change between the probe and the memorized stimulus involved irrelevant high-discriminable information, but not when it involved irrelevant fine-grained information.

Conclusions/Significance
We conclude that dissociated extraction mechanisms exist in VWM for information resolved via dissociated processes in visual perception (at least for the information types tested here), supporting the analogy hypothesis.
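To make the change-detection manipulation described above concrete, here is a minimal sketch of a single trial's logic. This is purely illustrative and not the authors' code: the dimension names, the helper function, and the 50% change probability are all assumptions.

```python
import random

def make_probe(memory_item, change_relevant, change_irrelevant):
    """Build a probe that may differ from the memorized stimulus on the
    task-relevant dimension, the irrelevant dimension, or both."""
    probe = dict(memory_item)
    if change_relevant:
        probe["relevant"] = "changed"
    if change_irrelevant:
        probe["irrelevant"] = "changed"
    return probe

# Observers judge only whether the RELEVANT dimension changed; the question
# is whether a change on the irrelevant dimension nonetheless affects
# accuracy (and enhances the N270), depending on its information type.
memory_item = {"relevant": "original", "irrelevant": "original"}
probe = make_probe(memory_item,
                   change_relevant=random.random() < 0.5,
                   change_irrelevant=random.random() < 0.5)
correct_response = ("same" if probe["relevant"] == memory_item["relevant"]
                    else "different")
```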
Visual working memory (VWM) maintains and manipulates a limited set of visual objects that are being actively used in visual processing. To explore whether and how fine-grained detail is stored in VWM, we conducted four experiments while recording the contralateral delay activity (CDA), an event-related potential difference wave that reflects information maintenance in VWM. The type of remembered information was manipulated by using simple and complex objects as materials. We found that the amplitude of the CDA was modulated by object complexity: as the set size of the memory array rose from 2 to 4, the CDA amplitude stopped increasing for complex objects containing detailed information but continued to increase for highly discriminable simple objects. These results suggest that VWM can store fine-grained detail; however, it cannot store all the detail from four complex objects. This implies that VWM capacity is not characterized solely by a fixed number of objects; at least one stage is influenced by the detailed information the objects contain. We discuss these results within a two-stage storage model of VWM, in which different types of perceptual information (highly discriminable features and fine-grained features) are maintained via two distinct mechanisms.
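The CDA is conventionally computed as a difference wave: activity at posterior electrodes contralateral to the memorized hemifield minus activity at the corresponding ipsilateral electrodes, averaged over the retention interval. A minimal NumPy sketch of that computation follows; the array shapes, random placeholder data, and window indices are assumptions for illustration, not the paper's values.

```python
import numpy as np

# Hypothetical epoched ERP data: trials x time samples, from posterior
# electrodes contralateral vs. ipsilateral to the memorized hemifield.
contra = np.random.randn(200, 500)  # e.g., 200 trials, 500 samples
ipsi = np.random.randn(200, 500)

# CDA difference wave: contralateral minus ipsilateral, averaged over trials.
cda_wave = (contra - ipsi).mean(axis=0)

# Mean CDA amplitude over an assumed retention-interval window
# (sample indices 150:450 are placeholders).
cda_amplitude = cda_wave[150:450].mean()
print(f"Mean CDA amplitude: {cda_amplitude:.3f} µV")
```

Under this measure, an amplitude that grows with set size indicates that additional items are being held in VWM; a plateau, as reported for complex objects between set sizes 2 and 4, indicates the maintained detail stopped increasing.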
Human interactions are guided by continuous communication among the parties involved, in which verbal communication plays a primary role. However, speech does not necessarily reveal to whom it is addressed, especially for young infants, who are unable to decode its semantic content. To overcome this difficulty, adults often explicitly mark their communication as infant-directed. In the present study we investigated whether ostensive signals, which disambiguate the infant as the addressee of a communicative act, modulate the brain responses of 6-month-old infants to speech and gestures in an ecologically valid setting. In Experiment 1, we tested whether the gaze direction of the speaker modulates cortical responses to infant-directed speech. To provide a naturalistic environment, two infants and their parents participated at the same time. In Experiment 2, we tested whether a similar modulation of the cortical response would be obtained by varying the intonation of speech (infant-directed versus adult-directed) during one-on-one, face-to-face communication. The results of both experiments indicated that only the combination of ostensive signals (infant-directed speech and direct gaze) led to enhanced brain activation. This effect was reflected in responses localized to regions known to be involved in processing the auditory and visual aspects of social communication. The study also demonstrates the potential of fNIRS as a tool for studying neural responses in naturalistic scenarios and for measuring brain function in multiple participants simultaneously.
This research examined whether feeling awe weakens people's desire for money. Two experiments demonstrated that, as a self-transcendent emotion, awe decreased the desire for money. In Experiment 1, recalling a personal experience of awe led people to place less importance on money than recalling a happy experience or a neutral experience did. In Experiment 2, we examined different variants of awe, such as negative awe and non-nature awe. Viewing awe-eliciting images, regardless of the kind of awe, induced people to put less effort into obtaining money. Process evidence suggested that awe weakened the desire for money through its power to make people transcend their mundane concerns. Our findings have implications for willingness to donate, price sensitivity, religious practice, and economic utility.