2021
DOI: 10.1038/s41467-021-22364-0

Robust high-dimensional memory-augmented neural networks

Abstract: Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their abilities for relearning and adapting to new data. Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues. Access to this explicit memory, however, occurs via soft read and write operations involving every individual memory entry, resulting in a bottleneck when implemented using the conventional von Neumann computer a…

Cited by 81 publications (72 citation statements)
References 30 publications
“…On a more technical side, a very active area of research is memory-augmented neural networks (MANNs), in which a deep neural network is connected to an associative memory for fast and lifelong learning. Computing with HD vectors can reduce the complexity of MANNs by operating on binary vectors (80). This recently proposed method reduced the number of parameters by replacing the fully connected layer of a convolutional neural network with a binary associative memory for EEG-based motor imagery brain–machine interfaces (81).…”
Section: Discussion
confidence: 99%
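The binary associative memory readout mentioned in the statement above can be illustrated with a minimal sketch (a hypothetical toy example, not the cited works' code): class prototypes are stored as bipolar hypervectors, and a query is classified by its most similar stored entry.

```python
import numpy as np

# Toy binary associative memory standing in for a dense classification layer.
# Class prototypes ("keys") are bipolar {-1, +1} hypervectors; a query is
# classified by the most similar stored entry (dot-product similarity).
rng = np.random.default_rng(0)
D, n_classes = 10_000, 5                     # hyperdimensional width, classes

# In a MANN these entries would be written from binarized feature vectors;
# here they are random placeholders for illustration only.
memory = rng.choice([-1, 1], size=(n_classes, D))

def read(query):
    """Return the index of the stored entry most similar to the query."""
    return int(np.argmax(memory @ query))

# A noisy copy of prototype 2 (25% of components flipped) is still retrieved.
query = memory[2].copy()
flip = rng.choice(D, size=D // 4, replace=False)
query[flip] *= -1
print(read(query))                           # expected output: 2
```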
“…Many other types of classifiers, such as k-Nearest Neighbors [Karunaratne et al, 2021b], are also likely to work with HVs as their input. As mentioned in the previous section, linear classifiers can be used since HVs are formed with a nonlinear transformation (in the case of [Karunaratne et al, 2021b], with an HDC/VSA-guided convolutional neural network feature extractor). For example, ridge regression, which is commonly used for randomized neural networks [Scardapane and Wang, 2017], performed well with HVs [Rosato et al, 2021].…”
Section: Classification
confidence: 99%
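The ridge-regression readout on HVs mentioned above admits a short closed-form sketch (illustrative assumptions only: random bipolar features, one-hot targets, and a hypothetical regularization strength):

```python
import numpy as np

# Minimal sketch of a ridge-regression readout trained on hypervector (HV)
# features, in the style used for randomized neural networks. All data and
# dimensions are illustrative placeholders, not from the cited works.
rng = np.random.default_rng(1)
n, D, n_classes = 200, 2_000, 4

H = rng.choice([-1.0, 1.0], size=(n, D))     # HV features for n samples
y = rng.integers(0, n_classes, size=n)       # integer class labels
Y = np.eye(n_classes)[y]                     # one-hot targets

lam = 1.0                                    # assumed regularization strength
# Closed-form ridge solution: W = (H^T H + lam * I)^{-1} H^T Y
W = np.linalg.solve(H.T @ H + lam * np.eye(D), H.T @ Y)

pred = np.argmax(H @ W, axis=1)              # predict on the training set
print("train accuracy:", (pred == y).mean())
```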
“…Some neural networks already produce binary vectors (see [Mitrokhin et al, 2020]), and the transformation to HVs was done by randomly repeating those components to obtain the necessary dimensionality. In [Karunaratne et al, 2021b], the authors first guided a convolutional neural network to produce HDC/VSA-conforming vectors with the aid of proper attention and sharpening functions, and then simply used the sign function to transform those real-valued vectors to bipolar HVs (of the same dimensionality). A rather general way to transform format and/or dimensionality is to use RP, possibly with subsequent binarization by thresholding [Hersche et al, 2020a].…”
Section: The Use of Neural Networks for Producing HVs
confidence: 99%
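Both transformations described in this statement, sign-binarizing real-valued features into bipolar HVs and random projection (RP) followed by thresholding, can be sketched in a few lines (an illustrative toy example, not the authors' implementation):

```python
import numpy as np

# Two common ways to turn real-valued neural-network features into HVs.
# The feature vector and dimensions below are illustrative assumptions.
rng = np.random.default_rng(2)

features = rng.normal(size=512)              # e.g. a CNN feature vector

# (a) Sign function -> bipolar {-1, +1} HV of the same dimensionality.
bipolar_hv = np.where(features >= 0, 1, -1)

# (b) Random projection to D dimensions, then threshold at 0 to binarize.
D = 10_000
R = rng.normal(size=(D, features.size))      # random projection matrix
binary_hv = (R @ features > 0).astype(np.int8)   # binary {0, 1} HV

print(bipolar_hv.shape, binary_hv.shape)     # (512,) (10000,)
```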
“…So far, they have been applied in various fields including medical diagnosis (Widdows and Cohen 2015), image feature aggregation, semantic image retrieval, robotics (Neubert et al 2019b), addressing catastrophic forgetting in deep neural networks (Cheung et al 2019), fault detection, analogy mapping (Rachkovskij and Slipchenko 2012), reinforcement learning, long short-term memory (Danihelka et al 2016), pattern recognition, text classification (Joshi et al 2017), synthesis of finite state automata (Osipov et al 2017), and for creating hyperdimensional stack machines (Yerxa et al 2018). Interestingly, the intermediate and output layers of deep artificial neural networks can also provide high-dimensional vector embeddings for symbolic processing with a VSA (Neubert et al 2019b; Yilmaz 2015; Karunaratne et al 2021). Although processing vectors with thousands of dimensions is currently not very time-efficient on standard CPUs, VSA operations can typically be highly parallelized.…”
Section: Introduction
confidence: 99%
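As a rough illustration of why VSA operations parallelize well, the core operations on bipolar HVs (binding, bundling, similarity) reduce to elementwise and dot-product computations. The sketch below assumes the multiply-add-permute (MAP) style of VSA and is not taken from any of the cited works.

```python
import numpy as np

# Core VSA operations on bipolar {-1, +1} hypervectors (MAP-style sketch).
# Every step is an elementwise or dot-product operation, hence easy to
# vectorize or parallelize.
rng = np.random.default_rng(3)
D = 10_000

role, filler_a, filler_b = (rng.choice([-1, 1], size=D) for _ in range(3))

bound = role * filler_a                      # binding: elementwise multiply
bundle = np.sign(filler_a + filler_b + role) # bundling: elementwise majority

def sim(x, y):
    """Cosine similarity between two HVs."""
    return float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(sim(role * bound, filler_a))   # unbinding recovers filler_a (1.0)
print(sim(bundle, filler_a))         # bundle stays similar to its parts (~0.5)
```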