2018 Conference on Cognitive Computational Neuroscience
DOI: 10.32470/ccn.2018.1276-0

Low-Rank Nonlinear Decoding of μ-ECoG from the Primary Auditory Cortex

Abstract: This paper considers the problem of neural decoding from parallel neural measurement systems such as micro-electrocorticography (µ-ECoG). In systems with large numbers of array elements at very high sampling rates, the dimension of the raw measurement data may be large. Learning neural decoders for this high-dimensional data can be challenging, particularly when the number of training samples is limited. To address this challenge, this work presents a novel neural network decoder with a low-rank structure in …
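The abstract is truncated here, so the exact architecture is not visible. As a minimal sketch only, the snippet below illustrates the general idea of a low-rank bottleneck in a neural network decoder: the first dense layer's weight matrix is factored through a rank-r projection so that the number of parameters touching the high-dimensional input stays small. All dimensions, the rank, and the class name are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch (not the authors' architecture): a decoder whose first
# dense layer is constrained to rank r by factoring W (hidden x d_in)
# as U @ V, with V (r x d_in) and U (hidden x r). Dimensions are placeholders.
import torch
import torch.nn as nn

class LowRankDecoder(nn.Module):
    def __init__(self, in_dim=4096, rank=16, hidden=128, out_dim=32):
        super().__init__()
        # Factored low-rank input layer: project the high-dimensional
        # µ-ECoG features down to r components, then mix them.
        self.V = nn.Linear(in_dim, rank, bias=False)   # r x d_in
        self.U = nn.Linear(rank, hidden)               # hidden x r
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, x):
        # x: (batch, in_dim) flattened electrode-by-time features
        return self.head(self.U(self.V(x)))

# Example: 8 samples of 4096-dimensional measurements -> 32 decoded targets
y = LowRankDecoder()(torch.randn(8, 4096))
print(y.shape)  # torch.Size([8, 32])
```

The design intuition, under these assumptions, is that the rank-r factorization reduces the parameter count of the input layer from in_dim x hidden to roughly (in_dim + hidden) x r, which matters when training samples are scarce.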

Cited by 7 publications (8 citation statements)
References 16 publications
“…Our contribution is consistent with recent results for linear RNNs (Emami et al. 2021; Cohen-Karlik et al. 2023), on their ability to extrapolate to longer sequences by training on short sequences with stochastic gradient descent. Thus we provide a different perspective, while extending to a more general class of models.…”
Section: Introduction (supporting)
confidence: 91%
“…Our work improves on this line of research by introducing the idea of width-dependent kernels, which is especially well-suited to the context of DNNs where double descent manifests as the network width tends to infinity. Recent studies of the double-descent phenomenon have focused on random features regressions in the case of shallow networks (Gerbelot, Abbara, and Krzakala 2020; Liao, Couillet, and Mahoney 2020; Emami et al. 2020; Gerace et al. 2020; Li, Zhou, and Gretton 2021; Adlam and Pennington 2020b; Belkin, Hsu, and Xu 2020; Chen and Schaeffer 2021; Bosch et al. 2022; D'Ascoli et al. 2020), or kernel regression with no dependence on the width (Liu, Liao, and Suykens 2020; Mallinar et al. 2022).…”
Section: Related Work (mentioning)
confidence: 99%
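For readers unfamiliar with the random-features regression setting this excerpt refers to, the short numpy sketch below shows the typical experiment: ridge regression on random ReLU features, with test error usually peaking near the point where the number of features matches the number of training samples and then improving again as the width grows. The data model, feature map, widths, and ridge parameter are illustrative assumptions, not taken from any of the cited papers.

```python
# Illustrative sketch of random-features ridge regression (assumed setup:
# Gaussian inputs, a planted linear teacher, ReLU features, tiny ridge).
import numpy as np

rng = np.random.default_rng(0)
n, d, n_test = 200, 30, 1000
w_star = rng.standard_normal(d) / np.sqrt(d)            # planted teacher
X, Xt = rng.standard_normal((n, d)), rng.standard_normal((n_test, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)
yt = Xt @ w_star

for width in [50, 100, 190, 200, 210, 400, 2000]:
    W = rng.standard_normal((d, width)) / np.sqrt(d)     # random first layer
    F, Ft = np.maximum(X @ W, 0), np.maximum(Xt @ W, 0)  # ReLU features
    # Ridge-regularized least squares on the random features.
    a = np.linalg.solve(F.T @ F + 1e-6 * np.eye(width), F.T @ y)
    print(width, np.mean((Ft @ a - yt) ** 2))            # test error vs. width
```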
“…Statistical inference for generalized linear models (GLMs) with large has been extensively studied. There are four typical settings on : moderately high-dimensional with small / (Liang and Du, 2012; Hsu and Mazumdar, 2023; Kuchelmeister and van de Geer, 2024); sparse and high-dimensional (van de Geer, 2008; James and Radchenko, 2009; Levy and Abramovich, 2023); non-sparse and proportionally high-dimensional, i.e., / → ∈ (0, ∞) (Salehi et al., 2019; Aubin et al., 2020; Emami et al., 2020; Sawaya et al., 2023); and non-sparse and possibly infinite-dimensional (Wu et al., 2023). Note that our study adopts the last setting.…”
Section: High-dimensional Generalized Linear Models (mentioning)
confidence: 99%
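Some symbols in the excerpt above were lost in extraction. For reference only, and with symbol names of my own choosing rather than the quoted paper's, the proportional high-dimensional regime it describes is usually written as follows.

```latex
% Proportional (non-sparse) high-dimensional regime for a GLM with
% parameter dimension d and sample size n (symbol names are illustrative):
\[
  d, n \to \infty, \qquad \frac{d}{n} \to \delta \in (0, \infty),
\]
% i.e., the dimension grows in proportion to the number of observations,
% rather than staying small relative to n or being assumed sparse.
```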