2019 | DOI: 10.1016/j.neucom.2019.02.009
Computing optimal discrete readout weights in reservoir computing is NP-hard

Abstract: We show NP-hardness of a generalized quadratic programming problem, which we call unconstrained n-ary quadratic programming (UNQP). This problem has recently become practically relevant in the context of novel memristor-based neuromorphic microchip designs, where solving the UNQP is a key operation for on-chip training of the neural network implemented on the chip. UNQP is the problem of finding a vector v ∈ S^N which minimizes v^T Q v + v^T c, where S = {s_1, …, s_n} ⊂ ℤ is a given set of eligible pa…
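The UNQP objective from the abstract can be made concrete with a minimal brute-force sketch: enumerate every v ∈ S^N and evaluate v^T Q v + v^T c directly. The function name and the toy instance below are illustrative, not from the paper; the |S|^N enumeration cost is exactly why the paper's NP-hardness result matters and why this approach is only usable for tiny N.

```python
from itertools import product

def unqp_brute_force(Q, c, S):
    """Exhaustively minimize v^T Q v + v^T c over all v in S^N.

    Q: N x N list of lists, c: length-N list, S: eligible discrete values.
    Runs through |S|^N candidates, so it is exponential in N.
    """
    N = len(c)
    best_v, best_val = None, float("inf")
    for v in product(S, repeat=N):
        # objective: sum_ij v_i Q_ij v_j + sum_i c_i v_i
        val = sum(v[i] * Q[i][j] * v[j] for i in range(N) for j in range(N))
        val += sum(v[i] * c[i] for i in range(N))
        if val < best_val:
            best_v, best_val = v, val
    return best_v, best_val

# Hypothetical 2-dimensional instance with binary eligible set S = {0, 1}
Q = [[2, -1], [-1, 2]]
c = [-3, 0]
v, val = unqp_brute_force(Q, c, [0, 1])  # -> (1, 0), objective -1
```

For S = {0, 1} this is the classical unconstrained binary quadratic programming (QUBO) special case, which is already NP-hard; UNQP generalizes it to arbitrary finite S ⊂ ℤ.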

Cited by 11 publications (4 citation statements)
References 15 publications
“…In our setting we, however, might additionally postulate that we work below the overparametrization barrier due to the Boolean entries of W DMD (k), which brings substantial rigidity into play. The price one pays is making the problem harder from a computational optimization viewpoint [26].…”
Section: Results
confidence: 99%
“…The second type of node is based on a sigmoid function. Sigmoid functions are common in neural network studies [22], although the form of our node is not the same as the most commonly used sigmoid function. The form of the nonlinearities in these two node types is different enough that our results should be general for different types of reservoir computers.…”
Section: S(t)
confidence: 99%
“…However, for on-chip learning, the weights will be physically realized by states of electronic synapses, which currently can be reliably set only to a very small number of discrete values. It has been recently proved that computing optimal discrete readout weights in reservoir computing is NP-hard and approximate (or heuristic) methods must be exploited to obtain high-quality solutions in reasonable time for practical uses [30]. The spike-based learning algorithm proposed by [31] is an example of such approximate solutions for FPGA implementations.…”
Section: RC On Digital Neuromorphic Processors
confidence: 99%
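The last citation statement notes that, given the NP-hardness result, approximate or heuristic methods must be used to set discrete readout weights on chip. One simple heuristic in that spirit (a sketch of my own, not the algorithm from [31]) is coordinate descent over the UNQP objective: sweep over the weights and reset each one to the best eligible value while holding the others fixed, until no coordinate changes. This finds a local optimum quickly but offers no global guarantee.

```python
def unqp_coordinate_descent(Q, c, S, v0, max_sweeps=100):
    """Greedy local-search heuristic for UNQP.

    Repeatedly sets each coordinate v[i] to the value in S that minimizes
    v^T Q v + v^T c with all other coordinates fixed; stops when a full
    sweep changes nothing. Returns a local optimum, not necessarily the
    global minimizer.
    """
    def objective(v):
        N = len(v)
        return (sum(v[i] * Q[i][j] * v[j] for i in range(N) for j in range(N))
                + sum(v[i] * c[i] for i in range(N)))

    v = list(v0)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(v)):
            # best eligible value for coordinate i, others held fixed
            best_s = min(S, key=lambda s: objective(v[:i] + [s] + v[i + 1:]))
            if best_s != v[i]:
                v[i] = best_s
                changed = True
        if not changed:
            break
    return v, objective(v)

# Hypothetical toy instance: binary eligible set, start from all zeros
Q = [[2, -1], [-1, 2]]
c = [-3, 0]
v, val = unqp_coordinate_descent(Q, c, [0, 1], v0=[0, 0])
```

Each sweep costs O(N^2 · |S| · N) objective evaluations in this naive form; caching the quadratic terms would make a per-coordinate update O(N), which is what a practical on-chip variant would need.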