Proceedings of the 18th ACM International Conference on Computing Frontiers 2021
DOI: 10.1145/3457388.3458656
Ultra-compact binary neural networks for human activity recognition on RISC-V processors

Abstract: Human Activity Recognition (HAR) is a relevant inference task in many mobile applications. State-of-the-art HAR at the edge is typically achieved with lightweight machine learning models such as decision trees and Random Forests (RFs), whereas deep learning is less common due to its high computational complexity. In this work, we propose a novel implementation of HAR based on deep neural networks, and precisely on Binary Neural Networks (BNNs), targeting low-power general purpose processors with a RISC-V instr…
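The efficiency argument behind BNNs on general-purpose processors is that a binary dot product collapses to XNOR and popcount on packed words. The sketch below illustrates that reduction in plain Python/NumPy; it is only an illustration of the standard XNOR-popcount trick, not the paper's RISC-V kernel, and the packing scheme and function names are assumptions.

```python
import numpy as np

def binarize(x):
    """Map a real-valued vector to {-1, +1} by sign (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def pack_bits(b):
    """Pack a {-1, +1} vector into bytes (bit 1 encodes +1)."""
    return np.packbits(b > 0)

def xnor_popcount_dot(pa, pb, n):
    """Binary dot product of two packed {-1, +1} vectors of length n.

    XNOR marks positions where the signs agree; the dot product is then
    agreements - disagreements = 2 * agreements - n.
    """
    xnor = np.bitwise_not(np.bitwise_xor(pa, pb))
    agree = int(np.unpackbits(xnor)[:n].sum())  # drop the padding bits past n
    return 2 * agree - n

# Tiny check against the full-precision dot product of the binarized vectors.
a = binarize(np.random.randn(37))
b = binarize(np.random.randn(37))
assert xnor_popcount_dot(pack_bits(a), pack_bits(b), len(a)) == int(np.dot(a, b))
```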

Cited by 15 publications (22 citation statements)
References 24 publications

“…We train our DTs and CNNs in Python, before converting the trained models to optimized C code for the target MCU. For this conversion, we use the library described in [15] for the 1D CNN, while the DT and RF implementations are described in [1]. Both libraries are specifically tailored for the target hardware, leveraging the available SIMD and DSP-oriented instruction set extensions.…”
Section: Results
confidence: 99%
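The workflow this citation describes (train in Python, then emit C for the MCU) can be sketched in hedged form. The snippet below covers only the Python side: it trains a small Random Forest and dumps each tree as flat arrays that a plain C routine could traverse. The random stand-in data and the export format are illustrative assumptions, not the libraries cited in [1] and [15].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-in for windowed accelerometer features and HAR labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 12)).astype(np.float32)   # 12 features per window
y = rng.integers(0, 6, size=1000)                         # 6 activity classes

rf = RandomForestClassifier(n_estimators=8, max_depth=6, random_state=0)
rf.fit(X, y)

# Export each tree as flat arrays (split feature, threshold, child indices,
# majority class per node), a form a small C routine can walk with no ML runtime.
for t, est in enumerate(rf.estimators_):
    tree = est.tree_
    print(f"// tree {t}: {tree.node_count} nodes")
    print("feature   =", list(tree.feature))
    print("threshold =", [round(v, 4) for v in tree.threshold])
    print("left      =", list(tree.children_left))
    print("right     =", list(tree.children_right))
    print("node_cls  =", list(tree.value.argmax(axis=-1).ravel()))
```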
“…Figure 4 compares our method with a fully tree-based solution, consisting either of a single DT or of a Random Forest (RF) ensemble [1]. Precisely, the green curve shows the Pareto front obtained by training all RFs with a number of trees from 1 (single DT) to 15, and depths in the [2, 20] interval.…”
Section: Energy and Memory Comparison
confidence: 99%
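The sweep described above (1 to 15 trees, depths 2 to 20, then a Pareto front over the results) can be reproduced in outline as follows. This is a minimal sketch under assumptions: random stand-in data instead of a HAR dataset, and total node count as a crude memory/energy proxy, which is not the cost metric used in the citing paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def pareto_front(points):
    """Keep (cost, accuracy) points that no other point beats on both axes."""
    front = []
    for c, a in sorted(points, key=lambda p: (p[0], -p[1])):
        if not front or a > front[-1][1]:
            front.append((c, a))
    return front

# Illustrative stand-in data; the paper's comparison uses a real HAR dataset.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 12)).astype(np.float32)
y = rng.integers(0, 6, size=2000)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

points = []
for n_trees in range(1, 16):          # 1 tree (single DT) up to 15 trees
    for depth in range(2, 21):        # depths in the [2, 20] interval
        rf = RandomForestClassifier(n_estimators=n_trees, max_depth=depth,
                                    random_state=0).fit(Xtr, ytr)
        cost = sum(e.tree_.node_count for e in rf.estimators_)  # memory proxy
        points.append((cost, rf.score(Xte, yte)))

print(pareto_front(points))
```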
“…For instance, it can be noticed that EdMIPS quantizes most of the activations with 8 bits, whereas our method exploits its additional flexibility to reduce the activation precision, compensating it with an increase in the bitwidth assigned to an often small subset of the weight channels (e.g., only 3% in c4), in order to obtain the same final accuracy. Eventually, only the first and last layer activations, which notoriously often require higher precision [24], remain at 8 bits. Although we report a single example for the sake of space, similar considerations apply to the results on the other 3 benchmarks.…”
Section: Results Analysis
confidence: 99%
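The bitwidth layout this citation describes (first and last layer activations at 8 bits, low-precision inner activations, a few percent of weight channels bumped to a higher bitwidth) can be illustrated with a toy example. The layer names, the simple symmetric uniform quantizer, and the 2-bit inner precision below are assumptions for illustration; this is not EdMIPS nor the citing paper's search method.

```python
import numpy as np

def quantize_uniform(x, bits):
    """Symmetric uniform quantization to the given bitwidth (toy stand-in
    for the per-channel quantizers discussed in the citing work)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax if np.abs(x).max() > 0 else 1.0
    return np.round(x / scale).clip(-qmax, qmax) * scale

# Illustrative per-layer activation bitwidths: first/last stay at 8 bits,
# inner layers drop to 2 bits.
layers = ["c1", "c2", "c3", "c4", "fc"]
act_bits = {l: 8 if l in ("c1", "fc") else 2 for l in layers}

# ~3% of the weight channels of a toy layer get a higher bitwidth.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32))                    # 64 output channels
ch_bits = np.full(64, 2)
ch_bits[rng.choice(64, size=2, replace=False)] = 8   # 2 of 64 channels ≈ 3%

wq = np.stack([quantize_uniform(w[c], ch_bits[c]) for c in range(64)])
print(act_bits)
print("mean quantization error:", float(np.abs(w - wq).mean()))
```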