2019 IEEE Biomedical Circuits and Systems Conference (BioCAS) 2019
DOI: 10.1109/biocas.2019.8918974
AdaptHD: Adaptive Efficient Training for Brain-Inspired Hyperdimensional Computing

Abstract: Brain-inspired Hyperdimensional (HD) computing is a promising solution for energy-efficient classification. HD emulates cognitive tasks by operating on very long vectors instead of the numeric values used in contemporary processors. However, existing HD computing algorithms lack controllability over the training iterations, which often results in slow training or divergence. In this work, we propose AdaptHD, an adaptive learning approach based on HD computing that addresses these training issues. …

Cited by 41 publications (21 citation statements)
References 7 publications
“…A nice feature of this scheme is that it is extremely simple to implement in an on-line fashion: that is, on streaming data arriving continuously over time (Rahimi et al., 2018). It is common to fine-tune the class prototypes using a few rounds of perceptron training (Imani et al., 2019b). Given some subsequent piece of query data x_q ∈ X for which we do not know the correct label, we simply return the label of the most similar prototype:…”
Section: HD Computing
confidence: 99%
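The nearest-prototype query step quoted above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the prototype vectors, dimensionality, and noise level are all hypothetical, and cosine similarity is assumed as the similarity measure (a common choice in HD computing).

```python
import numpy as np

def most_similar_prototype(prototypes, x_q):
    """Return the label whose class prototype is most similar (cosine) to x_q."""
    sims = {label: np.dot(p, x_q) / (np.linalg.norm(p) * np.linalg.norm(x_q) + 1e-12)
            for label, p in prototypes.items()}
    return max(sims, key=sims.get)

# Toy example: three random bipolar class prototypes in D = 10,000 dimensions.
rng = np.random.default_rng(0)
D = 10_000
prototypes = {c: rng.choice([-1.0, 1.0], size=D) for c in ("a", "b", "c")}

# A query hypervector that is a noisy copy of prototype "b".
noise = rng.choice([-1.0, 1.0], size=D)
x_q = prototypes["b"] + 0.3 * noise

print(most_similar_prototype(prototypes, x_q))  # prints "b"
```

Because random hypervectors in high dimensions are nearly orthogonal, even a heavily perturbed copy of a prototype remains far closer to its source than to any other class.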
“…Therefore, iterative learning tries to overcome a limitation of single-pass learning: a single pass can saturate each class's prototype vector with the patterns most common in that class, and then perform badly on under-represented patterns of the same class. In [12], the authors tested iterative approaches with fixed and adaptive learning rates on several datasets, speeding up learning and saving energy while achieving the same or higher accuracy than single-pass training.…”
Section: Background and Related Work
confidence: 99%
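The iterative retraining described in the passage above can be sketched as a perceptron-style loop over the training set: misclassified samples are added to the correct prototype and subtracted from the wrongly predicted one. The error-driven decay of the learning rate below is a hypothetical stand-in for AdaptHD's adaptive scheme, which the paper defines more precisely (per-iteration and per-dimension variants).

```python
import numpy as np

def iterative_retrain(prototypes, data, labels, epochs=10, lr=1.0):
    """Perceptron-style refinement of HD class prototypes (simplified sketch).

    prototypes: dict mapping label -> hypervector (np.ndarray), updated in place.
    data, labels: training hypervectors and their class labels.
    """
    for _ in range(epochs):
        errors = 0
        for x, y in zip(data, labels):
            # Predict by cosine similarity to each class prototype.
            pred = max(prototypes,
                       key=lambda c: np.dot(prototypes[c], x) /
                       (np.linalg.norm(prototypes[c]) * np.linalg.norm(x) + 1e-12))
            if pred != y:
                # Pull the correct prototype toward x, push the wrong one away.
                prototypes[y] += lr * x
                prototypes[pred] -= lr * x
                errors += 1
        if errors == 0:
            break  # training set fully fit; stop early
        lr *= errors / len(data)  # hypothetical error-driven learning-rate decay
    return prototypes
```

Updating only on misclassified samples is what lets under-represented patterns pull the prototype back, rather than being drowned out by the majority patterns bundled in during the single initial pass.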
“…Our proposed approach shares an underlying idea with OnlineHD [23] and iterative learning [12] in that it focuses on less common patterns. It differs, however, in that it creates sub-classes rather than adding those patterns multiple times to a single vector.…”
Section: Background and Related Work
confidence: 99%