2019 IEEE 37th International Conference on Computer Design (ICCD)
DOI: 10.1109/iccd46524.2019.00038
Astraea: Self-Balancing Federated Learning for Improving Classification Accuracy of Mobile Deep Learning Applications

Abstract: Federated learning (FL) is a distributed deep learning method that enables multiple participants, such as mobile phones and IoT devices, to jointly train a neural network model while their private training data remains on local devices. This distributed approach is promising for edge computing systems, which hold a large corpus of decentralized data and require high privacy. However, unlike common training datasets, the data distribution of the edge computing system is imbalanced, which will introduce biases …
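The training setting the abstract describes can be made concrete with a minimal sketch. The snippet below shows FedAvg-style weighted model averaging, a common FL baseline rather than Astraea's specific self-balancing protocol; the function name and toy data are hypothetical. The key point matches the abstract: only model weights, never raw data, reach the server.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation sketch (illustrative, not Astraea's
    protocol): average locally trained models, weighted by each
    client's local dataset size."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two hypothetical clients, each holding a one-layer model (a 2x2 matrix).
w1 = [np.array([[1.0, 2.0], [3.0, 4.0]])]
w2 = [np.array([[5.0, 6.0], [7.0, 8.0]])]
print(fedavg_aggregate([w1, w2], client_sizes=[100, 300]))  # size-weighted mean
```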

Cited by 188 publications (105 citation statements). References 26 publications.
“…We adopt a convolutional neural network (CNN), which is a class of deep neural networks and commonly applied in the literature [8], [12], [13], [23], [24]. Specifically, we set four 3 × 3 convolution layers (32, 32, 64, 64 channels, each of which was activated by ReLU and batch normalized, and every two of which were followed by 2 × 2 max pooling), two fully connected layers (384 and 192 units with ReLU activation), and a final output layer with 10 units for training both CIFAR-10 and Fashion MNIST datasets.…”
Section: A. Setup
Citation type: mentioning
Confidence: 99%
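The architecture quoted above can be transcribed directly. The following PyTorch sketch follows the quoted description; the class name is ours, and the "same" padding and batch-norm-before-ReLU ordering are assumptions, since the quote specifies neither.

```python
import torch
import torch.nn as nn

class CitedCNN(nn.Module):
    """Sketch of the quoted CNN: four 3x3 conv layers (32, 32, 64, 64
    channels), each batch-normalized and ReLU-activated, with 2x2 max
    pooling after every two conv layers, then fully connected layers
    of 384 and 192 units and a 10-unit output."""

    def __init__(self, in_channels=3):  # 3 for CIFAR-10, 1 for Fashion-MNIST
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(384), nn.ReLU(),  # input size inferred: 4096 for CIFAR-10
            nn.Linear(384, 192), nn.ReLU(),
            nn.Linear(192, 10),             # both datasets have 10 classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a CIFAR-10-shaped batch.
logits = CitedCNN(in_channels=3)(torch.randn(8, 3, 32, 32))
print(logits.shape)  # torch.Size([8, 10])
```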
“…Besides local imbalance, global imbalance may also degrade the training accuracy of FL [24]. We now evaluate the impact of such global imbalance on the performance of the proposed CSFedAvg.…”
Section: F. The Impact of Global Imbalanced Distribution
Citation type: mentioning
Confidence: 99%
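The global imbalance referenced here can be quantified in several ways. Below is a minimal sketch that pools per-client label histograms and measures the KL divergence of the global class distribution from uniform; this is an illustrative choice of metric, not the one used in [24] or in the CSFedAvg evaluation.

```python
import numpy as np

def global_imbalance_kl(client_label_counts):
    """Illustrative global-imbalance score: KL divergence of the pooled
    (global) class distribution from the uniform distribution. Zero
    means perfectly balanced; larger values mean more skew."""
    global_counts = np.sum(client_label_counts, axis=0).astype(float)
    p = global_counts / global_counts.sum()   # global class distribution
    u = np.full_like(p, 1.0 / len(p))         # uniform reference
    nonzero = p > 0
    return float(np.sum(p[nonzero] * np.log(p[nonzero] / u[nonzero])))

# Two hypothetical clients over 4 classes: the pooled data is globally skewed.
clients = [np.array([90, 5, 3, 2]), np.array([80, 10, 5, 5])]
print(global_imbalance_kl(clients))  # > 0 indicates global imbalance
```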