2022
DOI: 10.1155/2022/7437023
Tiny Machine Learning for Resource-Constrained Microcontrollers

Abstract: An estimated 250 billion microcontrollers are in daily use in electronic devices, and they are capable of running machine learning models. Unfortunately, most of these microcontrollers are highly constrained in computational resources, such as memory and clock speed. These are exactly the resources that play a key role in training and running a machine learning model on an ordinary computer; in a microcontroller environment, however, such constraints make a critical difference. Therefore, a new par…

Cited by 20 publications (5 citation statements)
References 26 publications
“…However, general-purpose microcontrollers have very limited resources which pose a challenge to deep neural networks. [16,17] To address this challenge, we extend the concept of smart MEMS sensors further by integrating a hardware accelerator for CNNs.…”
Section: Smart MEMS Sensors
confidence: 99%
“…MCU and TinyML are the subjects of [28], [33], [34]. In [28], the authors analyze the TinyML frameworks for integrating ML algorithms within MCUs and present a real-world case study.…”
Section: A Hardware Perspective
confidence: 99%
“…TinyML heavily depends on hardware devices to enable efficient training and inference for its applications. Based on our literature research [28], [33], [34], [41], [42], in this section, we will examine processors for TinyML workloads spanning from general-purpose Central Processing Units (CPUs) to more programmable and adaptable architectures with discussions of Graphics Processing Units (GPUs), FPGAs, and Tensor Processing Units (TPUs). By structuring the analysis along this range, we aim to illustrate the fundamental trade-offs between efficiency, programmability, and flexibility.…”
Section: TinyML Devices and Tools
confidence: 99%
“…This allows the surveillance team to consider the temporal evolution of a probabilistic numeric value as a reliable forecasting tool. This machine learning model is straightforward and allows its implementation in small and less powerful devices; the algorithm focusses on Tiny Machine Learning applications, to achieve the minimum of computation power and time required for analysing the data (Immonen and Hämäläinen, 2022).…”
Section: Seismic Feature Formula
confidence: 99%