2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) 2019
DOI: 10.1109/ipdpsw.2019.00148

Training on the Edge: The why and the how

Abstract: Edge computing is the natural progression from Cloud computing: instead of collecting all data and processing it centrally, as in a cloud computing environment, we distribute the computing power and try to do as much processing as possible close to the source of the data. There are various reasons this model is being adopted quickly, including privacy and reduced power and bandwidth requirements on the Edge nodes. While it is common to see inference being done on Edge nodes today, it is much less co…

Cited by 29 publications (11 citation statements) | References 13 publications

Citation statements:
“…However, for an input image that is large enough, or a network that is deep enough, it is seen that the input image, network weights, and network activations together require more memory than available on a single node, even for a single input image (batch size = 1). We previously addressed this issue in the context of neural networks (Kukreja et al, 2019b). In this paper we address the same issue for FWI.…”
Section: FWI and Other Similar Problems
Confidence: 96%
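For context, one common way to keep training feasible when activations alone exceed a node's memory, as the excerpt above describes, is checkpointing: storing only a few intermediate activations and recomputing the rest during the backward pass. Below is a minimal sketch, assuming PyTorch and its torch.utils.checkpoint utility; the model, input size, and number of segments are hypothetical and not taken from the cited work.

```python
# Minimal sketch: trading recomputation for activation memory with
# gradient checkpointing (assumes PyTorch; model and sizes are hypothetical).
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# A deep stack whose activations would normally all be kept for backprop.
model = nn.Sequential(*[nn.Sequential(nn.Conv2d(16, 16, 3, padding=1),
                                      nn.ReLU())
                        for _ in range(32)])

x = torch.randn(1, 16, 512, 512, requires_grad=True)  # batch size = 1

# Split the stack into 4 segments; only segment-boundary activations are
# stored, and the ones in between are recomputed during the backward pass.
y = checkpoint_sequential(model, 4, x)
loss = y.mean()
loss.backward()
```

The memory saved grows with network depth, at the cost of roughly one extra forward pass of compute, which is the trade-off the excerpt's memory-bound setting motivates.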
“…Traditionally the training of AI models works on a principle of centralized deployment architecture which carries heavy traffic in both communication directions (Kukreja et al, 2019; Zhou et al, 2019), eventually creating overhead. In our presented IPN, an ML-driven federated learning technique, also known as collaborative learning, is used to train the AI model with the aim of porting it onto edge devices.…”
Section: Edge Training
Confidence: 99%
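The collaborative learning pattern this excerpt refers to can be illustrated with a minimal federated-averaging sketch: each edge device trains on its own data and only model parameters travel to the server. This is an assumption-laden illustration in PyTorch; the model, client data, and round structure below are hypothetical.

```python
# Minimal sketch of federated averaging (FedAvg), the collaborative
# learning pattern mentioned above (assumes PyTorch; model, clients,
# and data are hypothetical).
import torch
import torch.nn as nn

def local_update(global_state, data, target, lr=0.01, epochs=1):
    """Each edge device trains a copy of the global model on its own data."""
    model = nn.Linear(10, 2)
    model.load_state_dict(global_state)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    return model.state_dict()

# Hypothetical per-client data that never leaves the device.
clients = [(torch.randn(32, 10), torch.randint(0, 2, (32,)))
           for _ in range(4)]

global_model = nn.Linear(10, 2)
for _round in range(5):
    local_states = [local_update(global_model.state_dict(), x, y)
                    for x, y in clients]
    # The server aggregates parameters only; raw data is never transmitted.
    avg_state = {k: torch.stack([s[k] for s in local_states]).mean(dim=0)
                 for k in local_states[0]}
    global_model.load_state_dict(avg_state)
```

Because only weight updates cross the network, the heavy bidirectional data traffic the excerpt attributes to centralized training is avoided.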
“…Recently, the on-device ML paradigm, in which the edge device itself processes data, has been gaining attention [1][2][3][4]. This in turn reduces both the potential security problems that arise when data are transmitted from the edge device to the server and the energy consumption of data communication.…”
Section: Related Work
Confidence: 99%