2020
DOI: 10.48550/arxiv.2012.07881
Preprint

Perceptron Theory for Predicting the Accuracy of Neural Networks

Denis Kleyko,
Antonello Rosato,
E. Paxon Frady
et al.

Abstract: Many neural network models have been successful at classification problems, but their operation is still treated as a black box. Here, we developed a theory for one-layer perceptrons that can predict performance on classification tasks. This theory is a generalization of an existing theory for predicting the performance of Echo State Networks and connectionist models for symbolic reasoning known as Vector Symbolic Architectures. In this paper, we first show that the proposed perceptron theory can predict the pe…

Cited by 6 publications (6 citation statements)
References 44 publications
“…The capacity theory [Frady et al., 2018b] has recently been extended to also predict the accuracy of HDC/VSA models in classification tasks [Kleyko et al., 2020c]. [Thomas et al., 2021] presented bounds for the perfect retrieval of sets and sequences from their HVs.…”
Section: Information Capacity of HVs
confidence: 99%
“…Therefore, the capacity theory can also be used to explain memory characteristics of reservoir computing. Moreover, using the abstract idea of dissecting a network into mapping and classifier parts [Papyan et al., 2020], it is possible to apply the capacity theory for predicting the accuracy of other types of neural networks (such as deep convolutional neural networks) [Kleyko et al., 2020c].…”
Section: HDC/VSA for Explaining Neural Networks
confidence: 99%
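A hedged illustration of the dissection idea quoted above (a minimal sketch under Gaussian assumptions, not the paper's actual estimator): treat everything up to the final layer as a fixed mapping, summarize the final-layer outputs for each class by their means and standard deviations, and estimate accuracy as the probability that the correct output unit is the largest. All names and the toy statistics below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_accuracy(means, stds, n_samples=100_000):
    """Monte Carlo estimate of P(correct unit wins), assuming the
    final-layer outputs are independent Gaussians per class.
    means, stds: (n_classes, n_classes) arrays; row c holds the output
    statistics measured on inputs whose true class is c."""
    n_classes = means.shape[0]
    hit_rate = 0.0
    for c in range(n_classes):
        # Draw simulated output vectors for inputs of true class c.
        outputs = rng.normal(means[c], stds[c], size=(n_samples, n_classes))
        hit_rate += np.mean(outputs.argmax(axis=1) == c)
    return hit_rate / n_classes  # assumes balanced classes

# Toy example: 3 classes; the correct unit's mean is 1, the rest 0.
means = np.eye(3)
stds = np.ones((3, 3))
print(f"predicted accuracy: {predict_accuracy(means, stds):.3f}")
```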
“…HDC [11], [17], [18], also known as Vector Symbolic Architectures [19], is a neuro-inspired form of computing that represents concepts and their meanings as vectors in a high-dimensional space (hypervectors). These hypervectors encode and store information [20], [21] using distributed representations and are often randomly sampled from the underlying space (e.g., binary). The space's high dimensionality ensures that any two random hypervectors are nearly orthogonal with extremely high probability.…”
Section: B. Hyperdimensional Computing
confidence: 99%
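The near-orthogonality property quoted above is easy to check numerically. A minimal sketch (dimensionality and seed are illustrative, not taken from the cited works): sample two random bipolar hypervectors and observe that their cosine similarity concentrates near zero, with spread on the order of 1/sqrt(d).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # hypervector dimensionality

# Two independent random bipolar {-1, +1} hypervectors.
a = rng.choice([-1, 1], size=d)
b = rng.choice([-1, 1], size=d)

# For bipolar vectors, cosine similarity is the normalized dot product;
# for independent random vectors it is ~0 with std about 1/sqrt(d) = 0.01.
print(f"cosine similarity: {a @ b / d:+.4f}")
```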