2019
DOI: 10.1109/access.2019.2900084
Efficient FPGA Implementation of Multilayer Perceptron for Real-Time Human Activity Classification

Abstract: Smartphone-based human activity recognition (HAR) systems are not capable of delivering high-end performance for challenging applications. We propose a dedicated hardware-based HAR system for smart military wearables, which uses a multilayer perceptron (MLP) algorithm to perform activity classification. To achieve a flexible and efficient hardware design, the inherently parallel MLP architecture is implemented on an FPGA. The system performance has been evaluated using the UCI human activity dat…
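The classification step the abstract describes is a standard MLP forward pass: each layer is a matrix-vector product plus bias, with a nonlinearity between layers and an argmax over the output to pick the activity label. A minimal software sketch is below; the weights and layer sizes are toy values for illustration, not the paper's trained network or its FPGA datapath (the UCI HAR dataset itself uses 561 features and 6 activity classes).

```python
def relu(v):
    # Elementwise rectified linear unit
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    # y_j = sum_i x_i * W[i][j] + b[j]
    return [sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
            for j in range(len(b))]

def mlp_classify(features, layers):
    """layers: list of (W, b) pairs; ReLU between hidden layers,
    argmax over the final layer gives the predicted class index."""
    a = features
    for idx, (W, b) in enumerate(layers):
        a = dense(a, W, b)
        if idx < len(layers) - 1:
            a = relu(a)
    return max(range(len(a)), key=a.__getitem__)

# Toy 3-input, 4-hidden, 2-class network (illustrative weights only)
W1 = [[0.5, -0.2, 0.1, 0.0],
      [0.3, 0.8, -0.5, 0.2],
      [-0.1, 0.4, 0.6, -0.3]]
b1 = [0.0, 0.1, -0.1, 0.0]
W2 = [[1.0, -1.0],
      [0.5, 0.5],
      [-0.2, 0.9],
      [0.3, 0.1]]
b2 = [0.0, 0.0]

label = mlp_classify([0.2, -0.4, 0.7], [(W1, b1), (W2, b2)])  # → 1
```

On an FPGA, the per-neuron multiply-accumulate sums in `dense` are what gets parallelized: the inner products for all neurons of a layer can be computed concurrently in dedicated hardware, which is the source of the speedup a sequential processor cannot match.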

Cited by 80 publications (42 citation statements) · References 36 publications
“…Finally, for very small networks, such as the ones used in applications B and C, the runtime is far below the millisecond range. If the application scenario requires only very few classifications per cluster activation, then the IBEX core is the most energy-efficient one, with a consumption of 2.9 µJ and 0.15 µJ, respectively for applications B and C. Comparing to the work in [46] for application C, the IBEX core is 13.5× faster in computation time and 434× more energy efficient than a parallel FPGA implementation. However, if continuous classification is required, which is the case for the vast majority of the IoT applications, then the parallel execution, once again, outperforms in terms of speed and energy efficiency.…”
Section: Experimental Evaluation and Results (mentioning)
confidence: 99%
“…MLPs have been successfully used in a wide range of application scenarios, such as disease detection [45], activity recognition [46], and brain-machine interface [47]. Many studies identified MLPs to be the best or one of the best algorithms to solve tasks in the IoT domain using wearable devices [48]- [51].…”
Section: Application Showcases (mentioning)
confidence: 99%
“…Gaikwad et al [29] proposed an FPGA hardware implementation for military equipment that uses an MLP algorithm to perform classification tasks. Parallel MLP computation was implemented to achieve an enhanced hardware design.…”
Section: Hardware ANN FPGA Implementation (mentioning)
confidence: 99%
“…The development of wearable devices leads to the implementation of ML algorithms directly on board [18,19], allowing for a reduction in the amount of data to be transmitted, with significant advantages in terms of power consumption and system usability [14]. To address the need for platforms with good computing capacity, dedicated hardware architectures such as field programmable gate arrays (FPGAs) can be selected for the implementation of the algorithms instead of general-purpose processors [20][21][22][23]. This allows for control of the resources needed for the task and optimization of the system for performance or physical size, depending on the use case.…”
Section: Introduction (mentioning)
confidence: 99%