2016
DOI: 10.1007/978-3-319-47075-7_52

Distributed Neural Networks for Internet of Things: The Big-Little Approach

Abstract: Nowadays deep neural networks are widely used to accurately classify input data. An interesting application area is the Internet of Things (IoT), where a massive amount of sensor data has to be classified. The processing power of the cloud is attractive, however the variable latency imposes a major drawback for neural networks. In order to exploit the apparent trade-off between utilizing the available though limited embedded computing power of the IoT devices at high speed/stable latency and the seem…

Cited by 35 publications (36 citation statements). References 11 publications.
“…Fuzzy logic and artificial neural networks are important techniques in data fusion, in which the data from many sensors is combined in various ways [56,57]. This is an important function in the IoT.…”
Section: Fuzzy Systems Theory/Artificial Neural Network (mentioning)
confidence: 99%
“…Based on a network connection, the cloud comes with variable and high latency and a limited upload bandwidth [9]. Furthermore, the external processing includes additional costs [10] and can make the data accessible to third parties. This leads to the current approach to use embedded systems for a preprocessing of the data [10], which can reduce the data size and therefore lowers the costs for server and cloud capacity.…”
Section: Discussion (mentioning)
confidence: 99%
“…In [10] a so-called Big-Little neural network architecture is presented. The little neural network only classifies a subset of the output classes and can be executed locally with limited processing performance on an embedded system.…”
Section: Discussion (mentioning)
confidence: 99%
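To make the Big-Little flow described in this citation concrete, the following is a minimal C sketch (not the paper's implementation) of the decision logic: a little network trained on a subset of the output classes runs locally on the embedded device, and the input is only forwarded to the big network (for example in the cloud) when the local result is not confident. The functions little_net_infer and big_net_infer, the class count, and the threshold are hypothetical placeholders.

/* Minimal sketch of the Big-Little decision logic (hypothetical, not the paper's code). */
#include <stdio.h>
#include <stddef.h>

#define LITTLE_CLASSES 4      /* subset of classes the little network handles locally */
#define CONF_THRESHOLD 0.80f  /* minimum top-1 probability to accept the local result */

/* Placeholder for the small on-device network: fills class probabilities
 * and returns the index of the most likely class. */
static int little_net_infer(const float *input, size_t len, float probs[LITTLE_CLASSES])
{
    (void)input; (void)len;
    probs[0] = 0.9f; probs[1] = 0.05f; probs[2] = 0.03f; probs[3] = 0.02f;
    return 0;
}

/* Placeholder for the large network, e.g. invoked remotely in the cloud. */
static int big_net_infer(const float *input, size_t len)
{
    (void)input; (void)len;
    return 7; /* some class outside the little network's subset */
}

/* Big-Little decision: accept the local result when it is confident,
 * otherwise offload the sample to the big network. */
static int classify(const float *input, size_t len)
{
    float probs[LITTLE_CLASSES];
    int local_class = little_net_infer(input, len, probs);

    if (probs[local_class] >= CONF_THRESHOLD)
        return local_class;            /* fast path: low and stable latency */

    return big_net_infer(input, len);  /* slow path: higher accuracy, variable latency */
}

int main(void)
{
    float sample[8] = {0};
    printf("predicted class: %d\n", classify(sample, 8));
    return 0;
}

The design choice is the same trade-off named in the abstract: the confident local path gives stable latency on limited hardware, while the fallback path trades latency variability for the accuracy of the larger model.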
“…Generally, in PULP-based processors, the cluster domain is used for applications where a large amount of computations is required, while the FC is usually engaged in I/O handling or other kinds of scheduling. However, for an application scenario where, for example, a small network is used to detect the onset and, once the onset is detected, a deeper network is used for classification [44], both domains (SoC and Cluster) have its own advantage: the FC continuously reads the sensory data and executes the onset detection algorithm, while the cluster domain is activated once the onset is detected to perform the classification with a deep NN. In this case, our proposed framework stores the small network into the private L2 memory for the FC, while for the classification task, the DMA unit transfers the network using the double-buffering technique into the L1 memory for the computation in cluster domain.…”
Section: B. The FANN-on-MCU Deployment Framework (mentioning)
confidence: 99%
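The FC/cluster split and the double-buffered weight transfer described in this citation can be illustrated with the sketch below. All functions, buffer sizes, and tile counts are hypothetical placeholders standing in for the PULP runtime and FANN-on-MCU primitives; the real APIs are not reproduced here, and the memcpy call merely models a DMA transfer that would overlap with computation in hardware.

/* Sketch of the two-domain pattern described above: the fabric controller (FC)
 * runs a small always-on onset detector, and only when an onset is found is the
 * cluster used to run the deep network, with its weights streamed tile by tile
 * from L2 into L1 using two alternating buffers (double buffering).
 * All names and sizes are hypothetical placeholders, not the PULP/FANN-on-MCU API. */
#include <stdio.h>
#include <string.h>

#define TILE_WORDS 256
#define NUM_TILES  4                                  /* weight tiles of the deep network */

static float l2_weights[NUM_TILES][TILE_WORDS];       /* deep network weights in L2 */
static float l1_buf[2][TILE_WORDS];                   /* two L1 buffers for double buffering */

static int onset_detect(const float *window, int len) /* small network running on the FC */
{
    (void)len;
    return window[0] > 0.5f;                          /* placeholder decision */
}

static void dma_copy(float *dst, const float *src)    /* stands in for a DMA transfer */
{
    memcpy(dst, src, TILE_WORDS * sizeof(float));
}

static void cluster_run_tile(const float *weights)    /* one chunk of the deep network */
{
    (void)weights;                                    /* placeholder computation */
}

static void cluster_classify(void)
{
    int cur = 0;
    dma_copy(l1_buf[cur], l2_weights[0]);             /* prefetch the first tile */
    for (int t = 0; t < NUM_TILES; ++t) {
        int nxt = 1 - cur;
        if (t + 1 < NUM_TILES)
            dma_copy(l1_buf[nxt], l2_weights[t + 1]); /* in hardware this transfer overlaps
                                                         with the computation below */
        cluster_run_tile(l1_buf[cur]);
        cur = nxt;
    }
    printf("deep network executed on cluster\n");
}

int main(void)
{
    float window[64] = {0.9f};                        /* pretend sensor window */
    if (onset_detect(window, 64))                     /* FC: always-on small network */
        cluster_classify();                           /* cluster: triggered deep network */
    return 0;
}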