2020
DOI: 10.3390/a13050125

Moving Deep Learning to the Edge

Abstract: Deep learning is now present in a wide range of services and applications, replacing and complementing other machine learning algorithms. Performing training and inference of deep neural networks using the cloud computing model is not viable for applications where low latency is required. Furthermore, the rapid proliferation of the Internet of Things will generate a large volume of data to be processed, which will soon overload the capacity of cloud servers. One solution is to process the data at the edge devi…

Cited by 59 publications (43 citation statements). References: 152 publications (159 reference statements).
“…Moreover, unpredictable network connections between the cloud and the device can also pose significant challenges. Thus, running the deep learning system on local devices is an important requirement in many domains and has a wide variety of applications, including smart cities, self-driving cars, smart homes, medical devices, and entertainment (Véstias et al., 2020). Knowledge distillation allows developers to shrink deep learning models so that they fit on resource-limited devices with limited memory and power, as illustrated in Fig.…”
Section: Applications of Knowledge Distillation
confidence: 99%
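The distillation step mentioned in the excerpt above is typically implemented by training a small student network to match a large teacher's softened output distribution. Below is a minimal sketch of the standard Hinton-style distillation loss, assuming PyTorch; the function name, temperature, and blending weight are illustrative choices, not taken from the cited papers.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Hinton-style knowledge distillation loss (illustrative sketch).

    Blends the usual cross-entropy on hard labels with a KL term that
    pushes the student's softened predictions toward the teacher's.
    """
    # Soften both output distributions with the temperature T.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # KL divergence between the softened distributions; the T^2 factor
    # keeps gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the hard labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

A higher temperature exposes more of the teacher's "dark knowledge" in the non-target classes; alpha trades off imitation of the teacher against fitting the ground-truth labels.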
“…The increased popularity of DL in edge caching and wireless networks is mainly due to the following reasons: there is a huge amount of data traffic generated on the Internet; for instance, the rapid advancement of IoT and social networking applications results in exponential growth of the data traffic generated at the network edge [34], [79], [91], [137]. In addition, DL has the ability to extract accurate information from raw data obtained from devices, such as IoT devices deployed in complex environments [130].…”
Section: DL-Based Edge Caching
confidence: 99%
“…However, traditional DL schemes are cloud-centric and require a stream of raw training data to be sent to and processed in a centralized server [39], [74]. Sending streams of raw training data to a centralized server can raise several challenges, including slow response to real-time events in latency-sensitive applications, excessive consumption of network communication resources, increased network traffic, high energy consumption, and reduced privacy of training data [39], [43], [79], [87], [124], [137], [138]. Therefore, traditional DL frameworks may not be suitable in application scenarios with large-scale data that require low latency, efficiency, and scalability [87], [137].…”
Section: DL-Based Edge Caching
confidence: 99%
“…The data are distributed over a large group of end devices and are commonly transferred to a central server, where the learning of the ML models can be performed using the entire dataset. This poses two problems: transferring the data may lead to high communication costs, and the privacy of the users may be compromised [1]. To counter these problems, ML can be performed on-device, so that the data are kept localized on the device and are not uploaded to a central server.…”
Section: Introduction
confidence: 99%
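The on-device scheme described in the last excerpt is commonly realized with federated learning, where each device trains on its own data and the server only aggregates model weights, so raw data never leaves the device. Below is a minimal sketch of one FedAvg-style aggregation round, assuming NumPy; the function name and its arguments are hypothetical and for illustration only, not taken from the cited works.

```python
import numpy as np

def federated_averaging(client_weights, client_sizes):
    """One FedAvg-style aggregation round (illustrative sketch).

    client_weights: list of per-client parameter lists (NumPy arrays).
    client_sizes:   number of local training samples per client.
    Only these weights are shared with the server; the raw training
    data stays on the devices.
    """
    total = sum(client_sizes)
    # Average each parameter tensor across clients, weighted by the
    # size of each client's local dataset.
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Example: three clients, each holding one 4x2 weight matrix.
clients = [[np.random.randn(4, 2)] for _ in range(3)]
global_weights = federated_averaging(clients, client_sizes=[100, 50, 25])
```

Weighting by dataset size keeps the aggregate unbiased when clients hold unequal amounts of data; in a real system the server would broadcast the averaged weights back for the next local training round.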