Chen and Spence previously characterized the time courses and categorical specificity of crossmodal semantic congruency effects for pictures and printed words. To explore whether the timing and semantic congruency of auditory cues can affect human visual processing outside of awareness, we developed a Python-based audiovisual semantic congruency program to study the impact of auditory cues on the breakthrough time of printed words under the two-alternative forced-choice continuous flash suppression (2AFC-CFS) paradigm. Auditory cues were presented at five stimulus onset asynchronies (SOAs: -1000, -750, -500, -250, and 0 ms) relative to the visual targets, and in five match types with respect to the printed words: congruent, incongruent, correlated, noisy, and no-sound. The results showed main effects of SOA and congruency in the unconscious condition. Spoken words produced greater facilitation than naturalistic sounds, and when leading by 500 ms or more, spoken words induced a slowly emerging congruency effect in the congruent condition. In all cases, however, auditory stimuli sped up recognition relative to no sound. These results suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system.
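The 5 x 5 design above (five SOAs crossed with five match types) can be sketched as follows. This is a minimal illustrative sketch, not the authors' program; the function and variable names (`cue_onset`, `build_trials`, `visual_onset_ms`) are assumptions, and negative SOAs mean the auditory cue leads the printed word.

```python
# Hypothetical sketch of the trial structure (not the study's actual code).
# Negative SOA: the auditory cue starts before the visual target.
SOAS_MS = [-1000, -750, -500, -250, 0]
MATCH_TYPES = ["congruent", "incongruent", "correlated", "noisy", "no-sound"]

def cue_onset(visual_onset_ms, soa_ms):
    """Onset of the auditory cue, SOA milliseconds relative to the visual target."""
    return visual_onset_ms + soa_ms

def build_trials(visual_onset_ms=2000):
    """Fully cross SOA with match type, giving a 5 x 5 set of conditions."""
    return [
        {"soa": soa, "match": match, "cue_onset": cue_onset(visual_onset_ms, soa)}
        for soa in SOAS_MS
        for match in MATCH_TYPES
    ]

trials = build_trials()
```

With a visual target at 2000 ms, a -1000 ms SOA places the cue at 1000 ms, i.e. one second before the word appears under suppression.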
Deep learning has recently become popular across many fields, and CNNs and RNNs are two of its most important building blocks; they perform well in detecting event-related potentials (ERPs) in EEG. Because EEG data have many shortcomings, such as large inter-subject variability and a low signal-to-noise ratio, developing a system that generalizes across subjects remains difficult. In this paper, we examined a deep learning model (ADCRM) for EEG-based ERP detection in a subject-independent P300 detection scenario. In our model, a convolutional module encodes the frequency-domain and spatial characteristics of the EEG signals, a recurrent module explores their dynamic time-domain characteristics, and a channel-wise attention module focuses on the most informative EEG channels. We trained the model on a relatively small dataset and compared its performance with previously proposed methods. The experimental results show that our model achieved the best performance among all models tested, indicating that it can learn deep EEG features and generalize to unseen subjects.
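The channel-wise attention step described above can be illustrated with a minimal sketch. This is an assumed, dependency-free toy version, not the ADCRM implementation: in the real model the per-channel scores would be learned, whereas here they are passed in directly and simply converted to softmax weights that rescale each channel.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of per-channel scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def channel_attention(eeg, channel_scores):
    """Re-weight EEG channels by attention weights.

    eeg: list of channels, each a list of time samples.
    channel_scores: one relevance score per channel (learned in a real model).
    Returns the EEG with each channel scaled by its softmax weight.
    """
    weights = softmax(channel_scores)
    return [[w * x for x in channel] for w, channel in zip(weights, eeg)]

# Two channels, three samples; the second channel gets a higher score,
# so its samples dominate the attended representation.
eeg = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
attended = channel_attention(eeg, [0.0, 1.0])
```

Because the weights sum to one, channels the model scores as uninformative are attenuated rather than discarded, which keeps the downstream convolutional and recurrent modules differentiable end to end.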
Humans can quickly and efficiently extract information from complex natural scenes; the rapid, accurate detection of animals is one such example. Both animals and humans display sex differences, and both appear throughout everyday life. We therefore used a two-alternative forced-choice (2AFC) paradigm to investigate sex differences in the detection of these two target types. In our experiment, we balanced the relevant confounding factors and applied histogram equalization to the images, then analyzed subjects' reaction times to male and female targets. We report two main findings. First, collapsing across target type (human or animal), subjects responded faster to male targets than to female targets. Second, when target type was taken into account, the sex difference was significant only for animals.