Decision-making requires the accumulation of sensory evidence. However, in everyday life, sensory information is often ambiguous and contains decision-irrelevant features, so the brain must disambiguate the input and extract the decision-relevant features. Sensory processing and decision-making constitute two successive stages of perceptual decision-making: sensory processing relies on occipito-parietal neuronal activity in an earlier time window, whereas decision-making extends over a longer period and involves parietal and frontal areas. Although perceptual decision-making is actively studied, its neuronal mechanisms under ambiguous sensory evidence remain poorly characterized. Here, we analyzed the brain activity of subjects performing a perceptual decision-making task that required classifying ambiguous stimuli. We demonstrated that ambiguity induced high frontal θ-band power for 0.15 s post-stimulus onset, indicating increased reliance on top-down processes such as expectations and memory. Processing ambiguous stimuli also produced high occipito-parietal β-band power for 0.2 s and high fronto-parietal β-band power at 0.35-0.42 s post-stimulus onset. We propose that the former component reflects the disambiguation process and the latter the decision-making phase. Our findings complement existing knowledge of ambiguous perception by characterizing the temporal separation between the different cognitive processes during perceptual decision-making.
Repeated presentation of an item facilitates its subsequent detection or identification, a phenomenon known as priming. Priming may involve different types of memory and attention and affects neural activity in various brain regions. Here, we instructed participants to report the orientation of repeatedly presented Necker cubes with high (HA) and low (LA) ambiguity. By manipulating the contrast of the internal edges, we varied the ambiguity and orientation of the cube. We tested how both the repeated orientation (a stimulus factor) and the repeated ambiguity (a top-down factor) modulated the neuronal and behavioral responses. On the behavioral level, responses to an HA stimulus following an HA stimulus were faster and more accurate, and responses to a right-oriented LA stimulus following a right-oriented stimulus were faster. On the neuronal level, prestimulus theta-band power increased for the repeated HA stimulus, indicating activation of neural networks related to attention and uncertainty processing. The repeated HA stimulus enhanced hippocampal activation after stimulus onset. The right-oriented LA stimulus following a right-oriented stimulus enhanced activity in the precuneus and the left frontal gyri before the behavioral response. During repeated HA stimulus processing, enhanced hippocampal activation may reflect retrieval of information used to disambiguate the stimulus and determine its orientation. Increased activation of the precuneus and the left prefrontal cortex before responding to the right-oriented LA stimulus following a right-oriented stimulus may indicate a match between their orientations. Finally, we observed increased hippocampal activation after responding to the stimuli, reflecting the encoding of stimulus features in memory.
In line with the large body of work relating hippocampal activity to episodic memory, we propose that this type of memory may subserve the priming effect during repeated presentation of ambiguous images.
Incorporating brain-computer interfaces (BCIs) into daily life requires reducing decoding algorithms' reliance on calibration, or enabling calibration with minimal burden on the user. A potential solution is a pre-trained decoder that achieves reasonable accuracy for naive operators. Addressing this issue, we considered an ambiguous-stimulus classification task and trained an artificial neural network to classify brain responses to stimuli of low and high ambiguity. We built a pre-trained classifier utilizing time-frequency features corresponding to fundamental neurophysiological processes shared between subjects. To extract these features, we statistically contrasted electroencephalographic (EEG) spectral power between the classes in a representative group of subjects. The resulting pre-trained classifier achieved 74% accuracy on the data of newly recruited subjects. Analysis of the literature suggests that a pre-trained classifier could help naive users start using a BCI without a training phase and could further increase accuracy during the feedback session. Thus, our results are relevant to BCI use during paralysis or limb amputation, when there is no explicit user-generated kinematic output with which to train a decoder. In machine learning, our approach may facilitate the development of transfer learning (TL) methods for the cross-subject problem: it extracts an interpretable feature subspace from the source data (the representative group of subjects) that is relevant to the target data (a naive user), preventing negative transfer in cross-subject tasks.
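The feature-selection idea described above can be illustrated with a minimal numpy sketch. This is not the study's pipeline: the data are synthetic, and the shapes, effect size, t-statistic threshold, and threshold classifier are all illustrative assumptions. It only shows the principle of contrasting spectral-power features between classes within each subject, keeping features whose effect is consistent across the group, and applying the resulting fixed decision rule to a new subject without calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic group data: subjects x trials x features (time-frequency bins).
# Shapes and the effect size are illustrative assumptions, not the study's data.
n_subjects, n_trials, n_features = 10, 60, 40
low = rng.normal(0.0, 1.0, (n_subjects, n_trials, n_features))
high = rng.normal(0.0, 1.0, (n_subjects, n_trials, n_features))
high[..., 5:10] += 1.0  # a few bins carry a class difference shared across subjects

def t_statistic(a, b):
    """Two-sample t-statistic per feature (Welch's form), trials along axis 0."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

# Contrast the classes within each subject, then keep features whose effect
# is large on average and has the same sign in every subject of the group.
t_per_subject = np.stack([t_statistic(high[s], low[s]) for s in range(n_subjects)])
consistent = (np.abs(t_per_subject.mean(axis=0)) > 1.0) & \
             (np.abs(np.sign(t_per_subject).sum(axis=0)) == n_subjects)
selected = np.flatnonzero(consistent)

# "Pre-trained" classifier: a threshold on the mean of the selected features,
# fit once on the group and applied unchanged to a held-out (naive) subject.
group_high = high[..., selected].mean(axis=-1).ravel()
group_low = low[..., selected].mean(axis=-1).ravel()
threshold = (group_high.mean() + group_low.mean()) / 2

new_subject_high = rng.normal(0.0, 1.0, (n_trials, n_features))
new_subject_high[:, 5:10] += 1.0
accuracy = (new_subject_high[:, selected].mean(axis=-1) > threshold).mean()
```

Because the retained features are individual time-frequency bins, the selected subspace stays interpretable, which is what makes it possible to relate the transferred features to known neurophysiological processes.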
We trained an artificial neural network (ANN) to distinguish between correct and erroneous responses in a perceptual decision-making task using 32 EEG channels. The ANN input took the form of a 2D matrix whose vertical dimension corresponded to the number of EEG channels and whose horizontal dimension corresponded to the number of time samples. We focused on distinguishing the responses before their behavioural manifestation; therefore, we utilized EEG segments preceding the behavioural response. To handle the 2D input, the ANN included a convolutional stage transforming the 2D matrix into a 1D feature vector. We introduced three types of convolution: 1D convolutions along the x- and y-axes, and a 2D convolution along both axes. As a result, the F1-score for erroneous responses exceeded 88%, confirming the model's ability to predict perceptual decision-making errors from EEG. Finally, we discussed the limitations of our approach and its potential use in brain-computer interfaces to predict and prevent human errors in critical situations.
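The three convolution types over a channels × time EEG matrix can be sketched in plain numpy. This is a forward-pass illustration only, not the trained model: the segment length, kernel sizes, and the average-pooling step that flattens each output into a 1D feature vector are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative input: 32 EEG channels x 256 pre-response time samples.
n_channels, n_samples = 32, 256
segment = rng.normal(size=(n_channels, n_samples))

def conv_valid(x, k):
    """'Valid' 2D cross-correlation of x with kernel k (no padding, stride 1)."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# Three convolution types applied to the 2D segment (kernel sizes are assumed):
temporal_k = rng.normal(size=(1, 9))          # 1D along the time axis (x)
spatial_k = rng.normal(size=(n_channels, 1))  # 1D along the channel axis (y)
spatio_temporal_k = rng.normal(size=(5, 9))   # 2D along both axes

temporal = conv_valid(segment, temporal_k)                # (32, 248)
spatial = conv_valid(segment, spatial_k)                  # (1, 256)
spatio_temporal = conv_valid(segment, spatio_temporal_k)  # (28, 248)

# Average pooling along time collapses each output into a 1D feature vector
# that a dense classification head could consume.
features = np.concatenate([
    temporal.mean(axis=1),
    spatial.mean(axis=1),
    spatio_temporal.mean(axis=1),
])  # shape (32 + 1 + 28,) = (61,)
```

The channel-axis kernel acts as a learned spatial filter mixing all 32 electrodes at one time point, while the time-axis kernel acts as a temporal filter within one electrode; the 2D kernel combines both, which is why the three variants probe different structure in the same segment.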