2020
DOI: 10.1007/978-3-030-63076-8_13

Budget-Bounded Incentives for Federated Learning

Cited by 20 publications (15 citation statements)
References 8 publications
“…This line of research inspires several data valuation methods using the Shapley value (Ghorbani, Kim, and Zou 2020;Ghorbani and Zou 2019;Jia et al 2020;Wang et al 2020), the core (Yan and Procaccia 2021), influence functions (Richardson, Filos-Ratsikas, and Faltings 2020b), and volume (Xu et al 2021b). Previous works have used concepts from mechanism design to elicit truthful reporting (Chen et al 2020;Richardson, Filos-Ratsikas, and Faltings 2020a) and to incentivize sharing data and/or model parameters in federated learning (Cong et al 2020;Kang et al 2019a,b;Lyu et al 2020;Yu et al 2020;Zhan et al 2020;Xu et al 2021a). Other works have addressed data privacy (Ding et al 2021;Hu et al 2019), adversarial robustness (Hayes and Ohrimenko 2018;So, Guler, and Avestimehr 2020), communication efficiency (Ding et al 2021), and fairness in Bayesian optimization (Sim et al 2021).…”
Section: Introduction
confidence: 99%
“…This data redundancy creates an opportunity for EDs to trick the MO and earn more rewards, i.e., EDs are enticed into free-riding attacks. To solve this problem, Richardson et al. [105] designed a mechanism based on an influence metric that guarantees that truthful reporting is the best strategy for EDs. Moreover, they also provide a bound on the incentive budget, i.e., the budget must be proportional to the value of the FL model.…”
Section: B. Analysis of FL Incentive Schemes From the Security Perspective
confidence: 99%
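The statement above describes rewarding clients in proportion to an influence metric while keeping the total payout within a budget tied to the model's value. A minimal sketch of that idea follows; it is an illustration, not the paper's actual mechanism. The function name `budget_bounded_rewards`, the scaling factor `alpha`, and the influence scores are all hypothetical, and each client's influence is assumed to have been measured already (for instance, as the loss improvement its update brings on a held-out set).

```python
def budget_bounded_rewards(influences, model_value, alpha=0.5):
    """Illustrative sketch (not the cited paper's mechanism):
    pay each client in proportion to its non-negative influence,
    capping the total payout at a budget proportional to the
    value of the trained FL model."""
    budget = alpha * model_value          # hypothetical budget rule
    positive = {c: max(0.0, v) for c, v in influences.items()}
    total = sum(positive.values())
    if total == 0:
        return {c: 0.0 for c in influences}
    # Scale payments down only if they would exceed the budget.
    scale = min(1.0, budget / total)
    return {c: scale * v for c, v in positive.items()}

# Hypothetical edge devices (EDs) with measured influence scores;
# a negative score (e.g., a free-rider's noise) earns nothing.
rewards = budget_bounded_rewards({"ed1": 0.4, "ed2": 0.1, "ed3": -0.2},
                                 model_value=1.0)
```

Clipping negative influences to zero removes the incentive to submit junk updates, and the proportional scale keeps the sum of payments at or below `alpha * model_value` no matter how many clients participate.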
“…There exist a number of proposals for contribution measurement, i.e., algorithms that determine the quality of the service provided by the clients, for Federated Learning [16,1,28,33,23,31,36,29,37]. In particular, previous work established that Shapley values, which measure the marginal loss caused by a client's sequential absence from the training, offer accurate contribution measurements.…”
Section: Introduction
confidence: 99%
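The Shapley value mentioned above averages a client's marginal contribution over all possible coalitions, and can be computed exactly when the number of clients is small. The sketch below is illustrative only: the `utility` function and the per-client data qualities are hypothetical stand-ins for the model-quality measure a real federated system would evaluate.

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley values: for each client, the weighted average of its
    marginal contribution utility(S + {c}) - utility(S) over all subsets S
    of the other clients. Feasible only for small n (2^n evaluations)."""
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[c] += weight * (utility(set(subset) | {c})
                                       - utility(set(subset)))
    return values

# Toy, hypothetical setting: utility is the total "data quality" pooled.
data = {"A": 3, "B": 1, "C": 0}
utility = lambda S: sum(data[c] for c in S)
vals = shapley_values(["A", "B", "C"], utility)
# Additive utility: each value equals the client's own data quality.
```

With an additive utility, as in this toy example, each client's Shapley value reduces to its individual contribution; in real federated learning the utility (e.g., validation accuracy of a model trained on a coalition's data) is non-additive, and the exponential number of coalition evaluations makes approximation necessary.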
“…More generally, the theoretical analysis reveals that clients with a skewed data distribution or a high data amount are not treated fairly by SVB. In the local approach, all clients send all their model updates to the federator, who in turn aggregates and computes the contribution via the marginal loss [28,33,23,31,36,29,37]. The main drawbacks of local approaches are the excessive communication overhead and the lower privacy due to directly exchanged model updates [1].…”
Section: Introduction
confidence: 99%