Objective. To develop an efficient, embedded electroencephalogram (EEG) channel selection approach for deep neural networks, allowing us to match the channel selection to the target model while avoiding the large computational burden of wrapper approaches in conjunction with neural networks. Approach. We employ a concrete selector layer to jointly optimize the EEG channel selection and network parameters. This layer uses the Gumbel-softmax trick to build continuous relaxations of the discrete parameters involved in the selection process, allowing them to be learned in an end-to-end manner with traditional backpropagation. As the selection layer was often observed to include the same channel twice in a given selection, we propose a regularization function to mitigate this behavior. We validate this method on two different EEG tasks: motor execution and auditory attention decoding. For each task, we compare the performance of the Gumbel-softmax method with a baseline EEG channel selection approach tailored to that specific task: mutual information and greedy forward selection with the utility metric, respectively. Main results. Our experiments show that the proposed framework is generally applicable, while performing at least as well as (and often better than) these state-of-the-art, task-specific approaches. Significance. The proposed method offers an efficient, task- and model-independent approach to jointly learn the optimal EEG channels along with the neural network weights.
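The concrete selector layer described in this abstract can be sketched as follows. This is a minimal NumPy illustration of the Gumbel-softmax relaxation only: the class name, shapes, and forward pass are assumptions for illustration, not the authors' implementation, and the learnable logits would in practice be updated by backpropagation through a deep-learning framework.

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature, rng):
    """Draw a continuous relaxation of a one-hot sample (Gumbel-softmax)."""
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + gumbel) / temperature
    y = y - y.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

class ConcreteSelectorLayer:
    """K selection neurons over C channels; each neuron holds learnable logits
    and outputs a relaxed one-hot mixture of the input channels."""
    def __init__(self, n_channels, n_select, seed=0):
        self.rng = np.random.default_rng(seed)
        self.logits = np.zeros((n_select, n_channels))  # learnable parameters

    def forward(self, x, temperature=1.0):
        # x: (n_channels, n_samples) EEG segment
        w = gumbel_softmax_sample(self.logits, temperature, self.rng)  # (K, C)
        return w @ x  # (K, n_samples): K "soft" channels fed to the task network

sel = ConcreteSelectorLayer(n_channels=64, n_select=8, seed=42)
x = np.random.default_rng(0).standard_normal((64, 128))
out = sel.forward(x, temperature=0.5)
print(out.shape)  # (8, 128)
```

As the temperature is annealed toward zero during training, each row of the selection matrix approaches a hard one-hot vector, i.e., a discrete channel choice.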
We propose a dynamic sensor selection approach for deep neural networks (DNNs), which derives an optimal sensor subset for each specific input sample instead of a fixed selection for the entire dataset. This dynamic selection is jointly learned with the task model in an end-to-end way, using the Gumbel-softmax trick to allow the discrete decisions to be learned through standard backpropagation. We then show how this dynamic selection can be used to increase the lifetime of a wireless sensor network (WSN) by imposing constraints on how often each node is allowed to transmit. We further improve performance by including a dynamic spatial filter that makes the task-DNN more robust to the multitude of possible node subsets it now needs to handle. Finally, we explain how the selection of the optimal channels can be distributed across the different nodes in a WSN. We validate this method on a use case in the context of body-sensor networks, where we use real electroencephalography (EEG) sensor data to emulate an EEG sensor network, and we analyze the resulting trade-offs between transmission load and task accuracy.
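The key difference from the static case is that the selection logits are computed from the input sample itself. The following sketch assumes a minimal, hypothetical selection network (a linear map on per-channel log-power features); the paper's actual selection network and features may differ.

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Continuous relaxation of one-hot sampling via the Gumbel-softmax trick."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

def dynamic_select(x, W, tau, rng):
    """Input-dependent selection: per-sample logits come from a linear map on
    per-channel log-power features (a hypothetical minimal choice)."""
    feats = np.log(np.var(x, axis=1) + 1e-8)     # (C,) summary of this sample
    n_channels = feats.size
    logits = (W @ feats).reshape(-1, n_channels)  # (K, C) logits per selector
    return gumbel_softmax(logits, tau, rng)       # (K, C) relaxed selection

rng = np.random.default_rng(0)
C, K = 16, 4
W = rng.standard_normal((K * C, C)) * 0.1  # parameters of the selection net
x = rng.standard_normal((C, 256))          # one multi-channel input sample
w = dynamic_select(x, W, tau=0.5, rng=rng)
print(w.shape)  # (4, 16)
```

Transmission constraints could then be imposed by penalizing, over a training batch, how often each node's channels receive selection mass; the exact constraint formulation used in the paper is not reproduced here.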
Neural Source Coding (NSC) is a technique that exploits the modelling power of (deep) neural networks for the purpose of source coding. Its goal is to transform the data into a space of low entropy, where it can be coded by classic entropy coding schemes. In this paper, our goal is to investigate the use of NSC in so-called neuro-sensor networks, i.e., a type of body-sensor network consisting of a collection of wireless sensor nodes that record brain activity at different scalp locations, e.g., via electroencephalography (EEG) sensors. All nodes wirelessly transmit their data to a fusion center, where inference is then performed on the joint sensor signals by a given deep neural network. The NSC parameters and the inference network are learned jointly, optimizing the trade-off between accuracy and bitrate for a given application. We validate this method on a motor execution task in an emulated EEG sensor network and compare the resulting trade-offs with those obtained by directly quantizing the transmitted data to low-bit precision. We demonstrate that NSC yields more favorable trade-offs than straightforward quantization at very low bit depths and allows for large bandwidth gains at little loss in accuracy on the investigated brain-computer interface (BCI) task.
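The direct-quantization baseline against which NSC is compared can be illustrated as follows. These are standard textbook forms of a mid-rise uniform quantizer and an empirical-entropy rate estimate, not the paper's implementation; the entropy of the quantization indices lower-bounds the rate an ideal entropy coder would need.

```python
import numpy as np

def uniform_quantize(x, n_bits):
    """Mid-rise uniform quantizer over the signal's dynamic range."""
    lo, hi = x.min(), x.max()
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1).astype(int)
    x_hat = lo + (idx + 0.5) * step  # reconstruction at bin centers
    return idx, x_hat

def empirical_entropy(idx):
    """Empirical entropy (bits/sample) of the quantization indices."""
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)            # stand-in for one EEG channel
idx, x_hat = uniform_quantize(x, n_bits=3)
print(f"rate: {empirical_entropy(idx):.2f} bits/sample")
```

NSC instead learns the transform into the low-entropy space jointly with the inference network, so the rate term can be traded off directly against task accuracy rather than against reconstruction error.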
Many electroencephalography (EEG) applications rely on channel selection methods to remove the least informative channels, e.g., to reduce the number of electrodes to be mounted, to decrease the computational load, or to reduce overfitting effects and improve performance. Wrapper-based channel selection methods aim to match the channel selection step to the target model, yet they require re-training the model multiple times on different candidate channel subsets, which often leads to an unacceptably high computational cost, especially when said model is a (deep) neural network. To alleviate this, we propose a framework to embed the EEG channel selection in the neural network itself, jointly learning the network weights and optimal channels in an end-to-end manner with traditional backpropagation algorithms. We deal with the discrete nature of this new optimization problem by employing continuous relaxations of the discrete channel selection parameters based on the Gumbel-softmax trick. We also propose a regularization method that discourages selecting channels more than once. This generic approach is evaluated on two different EEG tasks: motor imagery brain-computer interfaces and auditory attention decoding. The results demonstrate that our framework is generally applicable, while being competitive with state-of-the-art EEG channel selection methods tailored to these tasks.
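One plausible form of the regularizer that discourages duplicate selections penalizes the total selection probability mass that any single channel accumulates across the K selection neurons. The threshold form below is an assumption for illustration; the exact penalty used in the paper may differ.

```python
import numpy as np

def duplicate_penalty(probs, threshold=1.0):
    """Penalize total selection mass above `threshold` on any channel, so two
    selection neurons are discouraged from concentrating on the same channel.
    probs: (K, C) relaxed selection matrix, rows summing to one."""
    per_channel = probs.sum(axis=0)  # total selection mass per channel
    return float(np.maximum(per_channel - threshold, 0.0).sum())

# Two neurons both concentrated on channel 0 -> nonzero penalty
probs = np.array([[0.9, 0.1, 0.0],
                  [0.8, 0.1, 0.1]])
print(duplicate_penalty(probs))
```

Adding this penalty (scaled by a weight) to the task loss pushes the gradient to spread the selection mass over distinct channels, so that K selection neurons end up covering K different channels.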