2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2019.8714821
A Binary Learning Framework for Hyperdimensional Computing

Cited by 58 publications (39 citation statements)
References 19 publications
“…We want to highlight that our experiments have not been performed in parallel. However, the proposed approach can be easily run in parallel on both CPU and GPU architectures due to the nature of the HD paradigm [39,40]. This would drastically speed up the procedure for building the classification model and the prediction of a class for new observations.…”
Section: Discussion, Conclusion and Future Directions (mentioning, confidence: 99%)
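The parallelism this excerpt points to can be illustrated with a minimal sketch (not from the cited paper; the dimensionality, class count, and `predict` helper are assumptions): HD inference reduces to a single vectorized Hamming-distance computation against all class hypervectors at once, which maps naturally onto CPU SIMD or GPU execution.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (assumed)

# Hypothetical binary class hypervectors for a 3-class model.
class_hvs = rng.integers(0, 2, size=(3, D), dtype=np.int8)

def predict(query_hv: np.ndarray) -> int:
    """Classify by the nearest class hypervector under Hamming distance.
    Distances to every class are computed in one vectorized step,
    which is what makes HD prediction easy to parallelize."""
    dists = np.count_nonzero(class_hvs != query_hv, axis=1)
    return int(np.argmin(dists))

query = class_hvs[1].copy()
query[: D // 10] ^= 1          # flip 10% of bits as noise
print(predict(query))          # → 1 (still nearest to class 1)
```

Because every class distance is independent, the same pattern extends to batching many queries at once.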
“…Prior work has proposed various algorithmic and hardware innovations to tackle the computational challenges of HD. Acceleration in hardware has typically focused on FPGAs [16][17][18] or ASIC-like accelerators [19,20]. FPGA-based implementations provide high parallelism and bit-level granularity of operations that significantly improve the effective utilization of resources and performance.…”
Section: Motivation (mentioning, confidence: 99%)
“…This flexibility is crucial since learning applications are heterogeneous in practice. Therefore, we here focus on an FPGA-based implementation but emphasize that our techniques are generic and can be integrated with ASIC- [19] and processor-based [20] implementations. As noted in the preceding section, the element-wise sum is a critical operation in the encoding pipeline.…”
Section: Motivation (mentioning, confidence: 99%)
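The element-wise sum this excerpt singles out can be sketched as follows (a hypothetical example; the feature hypervectors and majority threshold are assumptions, not the paper's exact pipeline): per-feature hypervectors are summed component-wise, then thresholded back to a binary vector.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000          # hypervector dimensionality (assumed)
n_features = 5      # number of feature hypervectors to bundle (assumed)

# Hypothetical binary hypervectors, one per input feature.
feature_hvs = rng.integers(0, 2, size=(n_features, D), dtype=np.int8)

# Element-wise sum across features: the critical encoding operation.
sums = feature_hvs.sum(axis=0)                    # shape (D,), values 0..5

# Majority threshold returns the bundle to a binary hypervector.
encoded = (sums > n_features // 2).astype(np.int8)
```

Since each of the D components is summed independently, hardware can process many components per cycle, which is why this step dominates the encoding pipeline's resource budget.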
“…Input data can have different representations; thus, there are different encoding modules to map data to high-dimensional space. For example, work in [16,22] proposed encoding methods to map feature vectors to high-dimensional space. Work in [23] encodes text-like data using the idea of random indexing.…”
Section: Encoding Module (mentioning, confidence: 99%)
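The random-indexing idea for text-like data mentioned in this excerpt can be sketched as follows (a hypothetical illustration; the `item` table, trigram size, and permutation-by-roll binding are assumptions): each symbol gets a random hypervector, position is encoded by permutation, and the n-gram is bound with XOR.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000  # hypervector dimensionality (assumed)

# Hypothetical random "item" hypervector per letter (random-indexing idea).
item = {c: rng.integers(0, 2, size=D, dtype=np.int8)
        for c in "abcdefghijklmnopqrstuvwxyz"}

def encode_trigram(tri: str) -> np.ndarray:
    """Bind a 3-gram: permute (roll) each letter's hypervector by its
    position, then combine with XOR, so different letter orders yield
    quasi-orthogonal hypervectors."""
    hv = np.zeros(D, dtype=np.int8)
    for pos, ch in enumerate(tri):
        hv ^= np.roll(item[ch], pos)
    return hv

a = encode_trigram("the")
b = encode_trigram("eht")
print(np.count_nonzero(a != b))  # large Hamming distance: order matters
```

Encoding a full text would bundle (element-wise sum plus threshold) the hypervectors of all its n-grams.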