ICC 2020 - 2020 IEEE International Conference on Communications (ICC)
DOI: 10.1109/icc40277.2020.9148872
Learning Centric Power Allocation for Edge Intelligence

Cited by 13 publications (5 citation statements)
References 13 publications
“…It performs dataset collection, which gathers sensing data in various scenarios using dedicated vehicles; dataset annotation, which labels the data manually; and model pretraining, which outputs initial model parameters w^[0] ∈ R^W. The number of samples in each scenario is determined via learning curves [20], [21]. The cloud server will transmit w^[0] to the edge server via the Internet, and the edge server further broadcasts w^[0] n (t)} either via teacher-student distillation [22] (e.g., road-side units provide labels) or ensemble distillation [23] (e.g., global fusion provides labels).…”
Section: System Architecture and Algorithm Design
confidence: 99%
“…However, since the collected data samples could be imbalanced among different classes, the existing learning accuracy models in [17], [18] are not applicable. To maximize the learning performance under imbalanced data, the proposed sample size model characterizes the F-measure as a function of the minority-class sample size.…”
Section: B. Challenges of EL and Proposed EL-UGV
confidence: 99%
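The F-measure referenced in that excerpt is the harmonic mean of precision and recall, which rewards correct classification of the minority class under imbalance. A minimal sketch from confusion-matrix counts (the helper name `f_measure` is illustrative, not from the cited paper):

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """F1 score from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)  # fraction of predicted positives that are correct
    recall = tp / (tp + fn)    # fraction of actual positives that are found
    return 2 * precision * recall / (precision + recall)

# Example: 8 minority-class samples found, 2 false alarms, 2 missed.
print(f_measure(8, 2, 2))  # → 0.8
```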
“…To the best of the authors' knowledge, there is no exact expression of Ψ_m(v_m). To this end, an inverse power law model [4], [5], [12], [13], which is supported by the statistical mechanics of learning [14], is adopted to approximate Ψ_m as…”
Section: System Model and Problem Formulation
confidence: 99%
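The inverse power law learning-curve model mentioned in that excerpt commonly takes the form Ψ(v) ≈ a·v^(−b) + c, where v is the sample size and c is an irreducible error floor. A minimal sketch that evaluates the model and fits a, b in log space given an assumed floor c (all parameter values and function names here are hypothetical illustrations, not the paper's calibration):

```python
import numpy as np

def power_law_error(v, a=1.0, b=0.5, c=0.05):
    """Approximate classification error as a * v**(-b) + c (inverse power law)."""
    return a * np.power(v, -b) + c

def fit_power_law(sizes, errors, c=0.05):
    """Fit a and b by linear regression on log(error - c) vs log(size)."""
    x = np.log(np.asarray(sizes, dtype=float))
    y = np.log(np.asarray(errors, dtype=float) - c)
    slope, log_a = np.polyfit(x, y, 1)  # y = log(a) - b * x
    return np.exp(log_a), -slope

sizes = [10, 100, 1000]
errors = [power_law_error(v) for v in sizes]
a_hat, b_hat = fit_power_law(sizes, errors)
print(a_hat, b_hat)  # recovers a ≈ 1.0, b ≈ 0.5 on noiseless data
```

In practice the floor c would also be estimated (e.g., by a small grid search), since it is rarely known in advance.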
“…In general, there are two ways to implement edge model training: centralized training and distributed training. Centralized training collects sensing data generated from IoT devices and trains the learning models at the edge [4], [5]. Distributed training deploys individual learning models at user terminals, and all the users upload their local model parameters periodically to the edge for model aggregation and broadcasting [6].…”
Section: Introduction
confidence: 99%
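The distributed-training scheme described in that excerpt, where users upload local parameters for aggregation at the edge, is typically implemented as a sample-count-weighted average of the local models. A minimal sketch of one aggregation round (the function name `fedavg` and the plain weighted-mean rule are illustrative assumptions, not the cited paper's exact method):

```python
import numpy as np

def fedavg(local_params, sample_counts):
    """Aggregate local parameter vectors by a weighted average,
    with weights proportional to each user's local sample count."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * np.asarray(p, dtype=float)
               for w, p in zip(weights, local_params))

# Two users: user 1 holds 1 sample, user 2 holds 3 samples.
global_model = fedavg([[0.0, 0.0], [2.0, 2.0]], [1, 3])
print(global_model)  # → [1.5 1.5]
```

The aggregated vector would then be broadcast back to the users for the next local-training round.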