2022
DOI: 10.3390/app12073701
FLaMAS: Federated Learning Based on a SPADE MAS

Abstract: In recent years, federated learning has emerged as a new paradigm for training machine learning models oriented to distributed systems. The main idea is that each node of a distributed system independently trains a model and shares only model parameters, such as weights, without sharing its training data set, which favors aspects such as security and privacy. Subsequently, and in a centralized way, a collective model is built that gathers all the information provided by all of the participating nodes. Severa…
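The workflow described in the abstract (local training on private data, then centralized aggregation of parameters only) can be sketched as a minimal federated loop. This is a hypothetical NumPy illustration, not the paper's implementation; the `local_train` update rule is a stand-in for a real training step:

```python
import numpy as np

def local_train(weights, data, lr=0.1):
    """Hypothetical local update: one gradient-descent-like step on private data."""
    # Each node adjusts its copy of the model using only its own data;
    # this toy "gradient" pulls the weights toward the local data mean.
    grad = weights - data.mean(axis=0)
    return weights - lr * grad

def aggregate(models):
    """Server-side step: average the parameters shared by all nodes."""
    return np.mean(models, axis=0)

# Three nodes hold disjoint private datasets (never sent to the server).
rng = np.random.default_rng(0)
datasets = [rng.normal(loc=c, size=(50, 4)) for c in range(3)]
global_w = np.zeros(4)

for _ in range(5):  # federated rounds
    local_models = [local_train(global_w.copy(), d) for d in datasets]
    global_w = aggregate(local_models)  # only parameters leave the nodes
```

Only the weight vectors cross the network in each round, which is the privacy-preserving property the abstract emphasizes.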

Cited by 10 publications (5 citation statements) | References 25 publications
“…Based on these parameters, it is possible to identify the tool that best suits the user's needs, finding the right compromise between ease of use and efficiency. Given their inherent distributed nature, ABMs lend themselves well to analyzing phenomena from the ever more attractive distributed computing domains such as federated learning [121][122][123][124] and blockchain systems [125][126][127][128]. An interesting perspective worth exploring in future work could be assessing the current ABM tool landscape's availability and suitability in such domains to inform possible feature development in this context.…”
Section: Discussion
confidence: 99%
“…where D = ∪_c D_c represents the whole training dataset over the device subset C, and |D| = ∑_{c=1}^{|C|} |D_c| denotes the total number of data samples. To solve the above distributed optimization problem, a number of studies have offered solutions [2,[20][21][22][23][24][25][26]. In [2], FederatedAveraging (FedAvg) is first advocated to combine local stochastic gradient descent (SGD) on each device with a server that performs model averaging.…”
Section: Federated Learning
confidence: 99%
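The dataset-size notation in the quoted statement corresponds to how FedAvg-style aggregation weights each client by its share of the data, |D_c| / |D|. A minimal sketch (function and variable names are illustrative, not from the cited works):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameters: sum over c of (|D_c| / |D|) * w_c."""
    total = sum(client_sizes)                     # |D| = sum of |D_c| over clients
    coef = np.array(client_sizes, dtype=float) / total
    stacked = np.stack(client_weights)            # shape: (num_clients, num_params)
    return (coef[:, None] * stacked).sum(axis=0)  # broadcast weights per client

# Two clients with unequal data shares: the larger client dominates the average.
w = fedavg([np.array([0.0, 0.0]), np.array([1.0, 1.0])],
           client_sizes=[10, 30])
# w == [0.75, 0.75]: the client holding 30 of 40 samples contributes 3/4 of the weight
```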
“…In [24], a layer-wise Federated Matched Averaging (FedMA) scheme is proposed for convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to address data heterogeneity. In [25], the authors propose Federated Learning Based on a SPADE MAS (FLaMAS), which designs a multi-agent system to enable flexibility and dynamism in FL. In [26], a Federated Learning-Based Graph Convolutional Network (FedGCN) is proposed to process non-Euclidean data.…”
Section: Federated Learning
confidence: 99%
“…In this way, a centralized collective model is constructed from the contributions of all participating nodes. Similar to our perspective, Rincon et al. [18] address this issue by introducing an MAS that forms a flexible and dynamic federated learning framework, allowing nodes to be added to the system easily, namely FLaMAS (Federated Learning Based on a SPADE MAS). This proposal was evaluated on the SPADE platform with the well-known MNIST dataset.…”
Section: Introduction
confidence: 99%