The issue of potential privacy leakage during centralized AI model training has drawn intense public concern. Federated Learning (FL), a Parallel and Distributed Computing (PDC) scheme, has emerged as a new paradigm that copes with the privacy issue by allowing clients to perform model training locally, without uploading their sensitive personal data. In FL, the number of clients can be very large, but the bandwidth available for model distribution and re-upload is quite limited, so it is sensible to involve only a subset of the volunteer clients in each round of training. The client selection policy is therefore critical to an FL process in terms of training efficiency, the quality of the final model, and fairness. In this paper, we model fairness-guaranteed client selection as a Lyapunov optimization problem and propose a C²MAB-based method to estimate the model exchange time between each client and the server. Based on this estimate, we design a fairness-guaranteed algorithm, termed RBCS-F, to solve the problem. The regret of RBCS-F is strictly bounded by a finite constant, justifying its theoretical feasibility. Beyond the theoretical results, we report empirical findings from real training experiments on public datasets.
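The abstract does not give the details of the C²MAB estimator, so the following is only a minimal sketch of how a contextual combinatorial bandit of the LinUCB family could estimate per-client model exchange times and pick the fastest clients; the class name, context features, and parameters are all illustrative assumptions, not the paper's actual RBCS-F algorithm (which additionally enforces fairness via Lyapunov queues).

```python
import numpy as np

class C2MABSelector:
    """Hypothetical LinUCB-style sketch: learn a shared linear model
    mapping a client's context (e.g. bandwidth, CPU features) to its
    model exchange time, then select clients optimistically."""

    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)      # regularized Gram matrix of contexts
        self.b = np.zeros(dim)    # time-weighted sum of contexts
        self.alpha = alpha        # exploration width

    def estimate(self, contexts):
        """Lower confidence bound on exchange time per client context
        (optimistic, since smaller time is better)."""
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        lcb = []
        for x in contexts:
            mean = x @ theta
            width = self.alpha * np.sqrt(x @ A_inv @ x)
            lcb.append(mean - width)
        return np.array(lcb)

    def select(self, contexts, k):
        """Pick the k clients with the smallest optimistic exchange time."""
        return np.argsort(self.estimate(contexts))[:k]

    def update(self, context, observed_time):
        """Refine the shared model once a round's true time is observed."""
        self.A += np.outer(context, context)
        self.b += observed_time * context
```

In a full FL round, the server would call `select` before model distribution and `update` with each participant's measured exchange time afterward; the fairness component of RBCS-F would further reweight the selection, which this sketch omits.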
Cloud computing has gained enormous popularity by providing high availability, scalability, and on-demand services. However, with the continuous rise of energy costs, the virtualized environment of cloud data centers poses a challenge to today's power monitoring systems. Software-based power monitoring is gaining prevalence, since power models can work precisely by exploiting soft computing methodologies such as genetic programming and swarm intelligence for model optimization. However, traditional power models barely consider virtualization and suffer from drawbacks such as high error rates, low feasibility, and insufficient scalability. In this paper, we first analyze the power signatures of virtual machines (VMs) in different configurations through experiments. We then propose a VM power model, named CAM, which adapts to the reconfiguration of VMs and provides accurate power estimation under CPU-intensive workloads. We also propose two training methodologies corresponding to two typical model-training situations. CAM can estimate the power of a single VM as well as of a physical server hosting several heterogeneous VMs. We used public Linux benchmarks to evaluate CAM. The experimental results show that CAM produced very small power estimation errors for both VMs (4.26% on average) and the host server (0.88% on average).
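The abstract does not specify CAM's exact functional form, so the following sketch only illustrates the general class of model it improves upon: a utilization-based linear power model fitted by least squares, with the host's dynamic power apportioned to VMs by CPU share. All function names and the attribution rule are assumptions for illustration, not CAM itself.

```python
import numpy as np

def fit_power_model(cpu_util, measured_power):
    """Fit P = p_idle + k * util by ordinary least squares,
    from paired samples of host CPU utilization and measured power."""
    X = np.column_stack([np.ones_like(cpu_util), cpu_util])
    coeffs, *_ = np.linalg.lstsq(X, measured_power, rcond=None)
    p_idle, k = coeffs
    return p_idle, k

def estimate_vm_power(k, vm_utils):
    """Attribute the host's dynamic power to each VM in proportion
    to its CPU utilization share (a simple assumed rule)."""
    total_util = sum(vm_utils)
    if total_util == 0:
        return [0.0 for _ in vm_utils]
    dynamic = k * total_util
    return [u / total_util * dynamic for u in vm_utils]
```

A model of this kind ignores VM reconfiguration and non-CPU components, which is precisely the kind of limitation the paper reports CAM addressing.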