Proceedings of the 55th Annual Design Automation Conference 2018
DOI: 10.1145/3195970.3196060
Hierarchical hyperdimensional computing for energy efficient classification

Abstract: Brain-inspired Hyperdimensional (HD) computing emulates cognition tasks by computing with hypervectors rather than traditional numerical values. In HD, an encoder maps inputs to high-dimensional vectors (hypervectors) and combines them to generate a model for each existing class. During inference, HD performs the task of reasoning by looking for similarities between the input hypervector and each pre-stored class hypervector. However, there is no unique encoding in HD that can perfectly map inputs to hypervectors…

Cited by 33 publications (26 citation statements)
References 14 publications
“…For class hypervectors with binarized elements, Hamming distance is an inexpensive and suitable similarity metric, while class hypervectors with non-binarized elements need cosine similarity instead. Most existing HD computing techniques use binarized class hypervectors in order to eliminate the costly cosine metric [17,24]. However, we observed that HD with a binary model provides significantly lower classification accuracy than a non-binary model.…”
Section: Associative Memory Module
confidence: 80%
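The trade-off described above can be made concrete with a minimal sketch (not the paper's implementation): Hamming similarity over a binarized class hypervector versus cosine similarity over the non-binary sum-of-samples model. All names and the toy data here are illustrative.

```python
import numpy as np

# D-dimensional bipolar hypervectors; D = 10,000 is a common choice in HD work.
D = 10_000
rng = np.random.default_rng(0)

def hamming_similarity(a, b):
    """Fraction of matching elements between two binarized hypervectors."""
    return float(np.mean(a == b))

def cosine_similarity(a, b):
    """Cosine similarity, needed once class hypervectors are non-binarized."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Non-binary class model: element-wise sum of training hypervectors.
train = rng.choice([-1, 1], size=(101, D))    # odd count avoids sign ties
class_hv = train.sum(axis=0)                  # non-binary class hypervector
class_hv_bin = np.sign(class_hv)              # binarized (majority-vote) model

query = train[0]                                   # a sample from the class
sim_cos = cosine_similarity(query, class_hv)       # cosine vs. non-binary model
sim_ham = hamming_similarity(query, class_hv_bin)  # Hamming vs. binary model
```

Binarizing replaces the multiplications and square roots of the cosine with bit comparisons, which is why hardware-oriented designs prefer it despite the accuracy loss noted in the excerpt.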
“…HD computing builds upon a well-defined set of operations with random HD vectors and is extremely robust in the presence of failures. HD offers a complete computational paradigm that is easily applied to learning problems, including analogy-based reasoning [11], latent semantic analysis [12], language recognition [13,14], prediction from multimodal sensor fusion [15], speech recognition [16,17], activity recognition [18], DNA sequencing [19], and clustering [20].…”
confidence: 99%
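The two properties named in the excerpt, near-orthogonality of random hypervectors and robustness to failures, can be shown with a toy illustration (not from the paper); the bundling operation and corruption rate here are assumptions for the sketch.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)

# Random bipolar hypervectors are nearly orthogonal in high dimensions.
a, b = rng.choice([-1, 1], size=(2, D))
orthogonality = abs(np.dot(a, b)) / D      # close to 0

# Bundling (element-wise addition) keeps the result similar to its inputs.
bundle = a + b
faulty = bundle.copy()
flip = rng.choice(D, size=D // 5, replace=False)
faulty[flip] = -faulty[flip]               # corrupt 20% of the elements

def cos(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Even after heavy corruption, the bundle is still recognizably similar
# to both of its constituents.
s_a, s_b = cos(faulty, a), cos(faulty, b)
```

Because information is spread holographically across all dimensions, no single failed element is critical; similarity degrades gracefully rather than catastrophically.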
“…Thus, the rotated HD-level vector must be summed element-wise to the HD representation of the feature. This procedure has been previously described in [27,28] with the name Ngram-based encoder. It consists of differentiating the feature positions through the assignment of unique hypervectors that result from the permutation of the feature HD representations, as described by Equation (2).…”
Section: Encoding Data and Building The Classification Model
confidence: 99%
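The N-gram-style encoder described above can be sketched as follows: each feature's hypervector is permuted (rotated here, via `np.roll`) by its position index before the element-wise sum, so that feature order changes the encoding. The toy item memory and feature values are illustrative assumptions, not Equation (2) itself.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(2)

# Toy item memory: one random bipolar hypervector per quantized feature value.
item_memory = {v: rng.choice([-1, 1], size=D) for v in range(4)}

def encode(feature_values):
    """Rotate each feature's hypervector by its position, then sum element-wise."""
    return sum(np.roll(item_memory[v], i) for i, v in enumerate(feature_values))

def cos(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

hv_abc = encode([0, 1, 2])
hv_cba = encode([2, 1, 0])        # same feature values, reversed order
order_sim = cos(hv_abc, hv_cba)   # well below 1: position is preserved
```

Without the permutation, the two encodings above would be identical; the rotation is what assigns each feature position a unique hypervector, as the excerpt describes.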
“…For more details, we point the reader to [26], where the authors perform a comprehensive review of classification techniques based on HD computing. Indeed, HD computing has been successfully applied in previous works to a limited set of research areas, such as speech recognition [27,28], the Internet of Things [29][30][31][32], and two life-science-related applications. The latter concern the detection of epileptogenic regions of the human brain from intracranial electroencephalography (iEEG) recordings [33] and pattern-matching problems on DNA sequences for diagnostic purposes [34,35].…”
Section: Introduction
confidence: 99%
“…Imani et al. [9] introduced MHD, a multi-encoder hierarchical classifier that enables HD to make the best use of multiple encoders with no increase in classification cost. MHD comprises two HD phases: a main phase and a decider phase.…”
Section: Literature Review
confidence: 99%