2021
DOI: 10.1613/jair.1.12664

A Theoretical Perspective on Hyperdimensional Computing

Abstract: Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining high-dimensional, low-precision, distributed representations of data. These representations can be combined with simple, neurally plausible algorithms to effect a variety of information processing tasks. HD computing has recently garnered significant interest from the computer hardware community as an energy-efficient, low-latency, and noise-robust tool for solving learning problems. In this review, we present a unified treatment…
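To make the abstract's terminology concrete, here is a minimal sketch of the basic HD operations (random hypervectors, binding, bundling, similarity), assuming the common bipolar model with {-1, +1} entries; the dimension and operation choices below are illustrative conventions, not the paper's specific formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # "high-dimensional": typical choices are in the thousands

def random_hv():
    """Sample a random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise product. Associates two hypervectors;
    the result is quasi-orthogonal to both inputs."""
    return a * b

def bundle(*hvs):
    """Bundling: elementwise majority (sign of the sum).
    The result stays similar to each of its inputs."""
    return np.sign(np.sum(hvs, axis=0))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

x, y, z = random_hv(), random_hv(), random_hv()
print(cosine(x, y))                # ~0: random hypervectors are quasi-orthogonal
print(cosine(bundle(x, y, z), x))  # ~0.5: a bundle resembles each constituent
print(cosine(bind(x, y), x))       # ~0: a binding resembles neither input
```

Bundling preserves similarity to its inputs while binding produces a vector unlike either input; together these let a single hypervector store sets and associations in a recoverable, distributed way.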


Cited by 54 publications (44 citation statements)
References 94 publications
Citation types: 2 supporting, 42 mentioning, 0 contrasting
“…HD computing was used as a classifier in a supervised manner. This approach gives the proposed framework the advantage of being adaptive to face variations due to the pre-processing stage while also being computationally simple with the online stage of incremental learning as well as resistance to noise [22] due to the nature of HD computing as further explained in section III-D.…”
Section: Hybrid (mentioning)
confidence: 99%
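The excerpt describes HD computing used as a supervised classifier that supports incremental, online learning. A minimal sketch of that general pattern, assuming a random-projection encoder and bundled per-class prototypes (the encoder, class counts, and all names here are hypothetical illustrations, not the cited framework's design):

```python
import numpy as np

rng = np.random.default_rng(1)
D, n_features, n_classes = 10_000, 64, 3

# Fixed random projection used as a simple encoder phi(x).
projection = rng.standard_normal((D, n_features))

def encode(x):
    return np.sign(projection @ x)

class HDClassifier:
    def __init__(self):
        # Real-valued accumulators: training just adds encoded samples in.
        self.prototypes = np.zeros((n_classes, D))

    def fit_one(self, x, label):
        """Online/incremental learning: bundle the sample into its class."""
        self.prototypes[label] += encode(x)

    def predict(self, x):
        hv = encode(x)
        norms = np.linalg.norm(self.prototypes, axis=1) + 1e-12
        sims = (self.prototypes @ hv) / norms  # cosine-style similarity
        return int(np.argmax(sims))

# Toy usage: three Gaussian blobs, learned one sample at a time.
clf = HDClassifier()
centers = rng.standard_normal((n_classes, n_features)) * 3
for _ in range(200):
    c = int(rng.integers(n_classes))
    clf.fit_one(centers[c] + rng.standard_normal(n_features), c)

test_label = int(rng.integers(n_classes))
test_x = centers[test_label] + rng.standard_normal(n_features)
print(clf.predict(test_x) == test_label)  # usually True
```

Because training is just accumulation into prototypes, new samples can be folded in at any time, which is the incremental-learning property the excerpt highlights; noise robustness follows from the distributed representation, since flipping a small fraction of hypervector entries barely moves the similarities.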
“…Understanding HDC from a theoretical perspective is currently limited. Thomas et al (2020) presented some theoretical foundations of HDC, introducing the benefit of high-dimensional vectors, hypervector encoding, and the connection between HDC and kernel approximation.…”
Section: Related Work (mentioning)
confidence: 99%
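The connection to kernel approximation mentioned in the excerpt can be illustrated with the classic random-features construction of Rahimi and Recht, in which inner products of randomized encodings approximate a Gaussian kernel; this is a standard stand-in used here for illustration, not the paper's exact encoding scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
D, n_features, sigma = 10_000, 8, 1.0

# Random Fourier features: z(x)_j = sqrt(2/D) * cos(w_j . x + b_j) with
# w_j ~ N(0, I / sigma^2) and b_j ~ Uniform[0, 2*pi). Then
# z(x) . z(y) ~= exp(-||x - y||^2 / (2 * sigma^2)), the Gaussian kernel.
W = rng.standard_normal((D, n_features)) / sigma
b = rng.uniform(0, 2 * np.pi, size=D)

def encode(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = rng.standard_normal(n_features)
y = rng.standard_normal(n_features)

approx = encode(x) @ encode(y)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))
print(approx, exact)  # close for large D
```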
“…Much of HDC's ability to learn comes from the fact that the very high dimension of the H-space allows combining information with these operations while preserving the information of the operands with high probability, due to the existence of a huge number of quasi-orthogonal vectors in the space. For a more theoretical analysis of these properties see the work by Thomas et al. [39].…”
Section: Operations (mentioning)
confidence: 99%
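The quasi-orthogonality property is easy to verify numerically: the cosine similarity of two independent random bipolar vectors in d dimensions has mean 0 and standard deviation 1/sqrt(d), so at d = 10,000 unrelated hypervectors are nearly orthogonal with overwhelming probability. A small check (the sample sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 10_000, 500

hvs = rng.choice([-1.0, 1.0], size=(n, d))
sims = (hvs @ hvs.T) / d                 # cosine similarity for bipolar vectors
off_diag = sims[~np.eye(n, dtype=bool)]  # drop self-similarities

print(off_diag.mean())                   # ~0
print(off_diag.std(), 1 / np.sqrt(d))    # both ~0.01
print(np.abs(off_diag).max())            # rarely exceeds ~5 / sqrt(d)
```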
“…First, the approximate label hypervector is obtained by binding the model with the encoded sample, M ⊗ 𝜙(x) ≈ 𝜙_ℓ(ℓ(x)), which exploits the self-inverse property of binding. The remaining terms add noise, making the equality only approximate [19, 39]. The precise label hypervector is then the most similar label hypervector L_l, where:…”
Section: Regression (mentioning)
confidence: 99%
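The unbinding step the excerpt sketches relies on the self-inverse property of bipolar binding: since x ⊗ x is the all-ones vector, binding the model with an encoded sample cancels that sample's factor and leaves its label hypervector plus quasi-orthogonal cross terms. A minimal sketch (the model construction and names below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
D, n_labels, n_pairs = 10_000, 5, 5

def random_hv():
    return rng.choice([-1, 1], size=D)

labels = [random_hv() for _ in range(n_labels)]   # label hypervectors L_l
samples = [random_hv() for _ in range(n_pairs)]   # encoded samples phi(x_i)
pair_label = [int(rng.integers(n_labels)) for _ in range(n_pairs)]

# Model: bundle of bound pairs, M = sum_i phi(x_i) * L_{l(i)}.
M = np.sum([samples[i] * labels[pair_label[i]] for i in range(n_pairs)], axis=0)

# Unbind: M * phi(x_0) = L_{l(0)} + cross-term noise (self-inverse at work).
noisy_label = M * samples[0]

# Recover the precise label as the most similar label hypervector.
sims = [noisy_label @ L / D for L in labels]
print(int(np.argmax(sims)) == pair_label[0])  # True with high probability
```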