2017
DOI: 10.15407/kvt188.02.005
Neural Autoassociative Memories for Binary Vectors: A Survey

Abstract: Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear-time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural ne…
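The kind of memory the abstract describes can be illustrated with a minimal sketch of a Willshaw-style binary autoassociative memory: sparse binary vectors are stored by clipped Hebbian outer products, and a noisy cue is completed by thresholding the dendritic sums. This is an illustrative toy, not any specific model from the survey; the dimension, sparsity, and load are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 256, 8, 20          # dimension, active bits per vector, stored items

# m sparse binary vectors, each with exactly k ones
items = np.zeros((m, n), dtype=np.uint8)
for row in items:
    row[rng.choice(n, size=k, replace=False)] = 1

# Willshaw-style storage: clipped (binary) Hebbian outer products
W = np.zeros((n, n), dtype=np.uint8)
for x in items:
    W |= np.outer(x, x)

def recall(cue):
    """Complete a (possibly degraded) binary cue from the memory."""
    s = W @ cue                                 # dendritic sums
    return (s >= cue.sum()).astype(np.uint8)    # threshold at cue activity

# degrade item 0 by deleting one active bit, then recall it
cue = items[0].copy()
cue[np.flatnonzero(cue)[0]] = 0
out = recall(cue)
```

At this low load the clipped weight matrix stays sparse, so thresholding at the cue's own activity recovers the deleted bit with very high probability; as more correlated items are stored, spurious bits appear, which is exactly the capacity/retrieval trade-off such surveys analyze.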

Cited by 14 publications (15 citation statements). References 143 publications (233 reference statements).
“…However, its unique property is formation of the second main hierarchy type, i.e., of generalization (class-instance) hierarchies. It is formed when many similar (correlated) HVs are stored (memorized), based on the idea of Hebb's cell assemblies including cores (subsets of HV 1-components often met together, corresponding to, e.g., typical features of categories and object-prototypes) and fringes (features of specific objects), see [Rachkovskij et al, 2013], [Gritsenko et al, 2017].…”
Section: Associative-projective Neural Network (citation type: mentioning; confidence: 99%)
“…The analogical episodes should include two major types of hierarchy, the compositional ("part-whole") one, and the generalization ("is-a") hierarchy. We believe that associative memories [Gritsenko et al, 2017] may provide one way to form "is-a" hierarchies (see some discussion in [Rachkovskij et al, 2013]), but this topic has not yet been studied extensively in the context of HDC/VSA. In terms of forming part-whole hierarchies from 2D images, a recent conceptual proposal had been given in [Hinton, 2021].…”
Section: Open Issues (citation type: mentioning; confidence: 99%)
“…In the Hyperseed algorithm, HD-map P acts as an autoassociative memory [56]. That is, the only operation performed on the HD-map is the search for the Best Matching Vector (BMV) given some input hypervector.…”
Section: B Search Procedures In Hyperseed: Finding Best Matching Vect... (citation type: mentioning; confidence: 99%)
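The best-matching-vector search described in the quote above can be sketched as a brute-force similarity scan over the rows of an HD-map: for bipolar hypervectors, the stored row with the largest dot product against the query is the BMV. This is a hedged illustration under assumed names (`P`, `best_matching_vector`) and dimensions, not Hyperseed's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

d, m = 1024, 50                                       # hypervector dimension, HD-map size
P = rng.choice([-1, 1], size=(m, d)).astype(np.int8)  # stored bipolar hypervectors

def best_matching_vector(query):
    """Index of the stored hypervector most similar to the query (max dot product)."""
    return int(np.argmax(P @ query.astype(np.int32)))

# query: a noisy copy of row 7 with 20% of components flipped
q = P[7].copy()
flip = rng.choice(d, size=d // 5, replace=False)
q[flip] *= -1

idx = best_matching_vector(q)
```

Because random bipolar hypervectors are nearly orthogonal in high dimension, even a heavily corrupted query still correlates far more strongly with its source row than with any other, so the argmax recovers the stored item; this concentration effect is what makes such memories usable as content-addressable lookup.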
“…An architecture for memory-recall of sensor stimuli, through the use of VSA is also proposed in [126]. This has been further explored in models of autoassociative, distributed memory that can be naturally implemented by neural networks in [127], [128], [129], [130].…”
Section: A Advances In Research (citation type: mentioning; confidence: 99%)