2024
DOI: 10.3934/mfc.2023023
Role of federated learning in healthcare systems: A survey

Abstract: Nowadays, machine learning affects practically every industry, but the effectiveness of these systems depends on the accessibility of training data sets. Every device now produces data, and that data can serve as the foundation for upcoming technologies. Traditional machine learning systems need centralised data for training, but valid data in sufficient quantities is not always available because of various privacy risks. Federated learning can solve this issue [78]. In a federated learning…

Cited by 5 publications (2 citation statements) · References 69 publications
“…Figure 1 illustrates the training of an ML model within FL settings. The FL process [5] has been described in Algorithm 1. Numerous FL aggregation algorithms are available.…”
Section: Introduction
confidence: 99%
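The statement above refers to the generic FL training loop (Algorithm 1 of the cited survey). As a rough illustration of that loop, and not the survey's exact algorithm, the sketch below assumes a simple linear model trained with local gradient descent; the names `local_update` and `federated_round` and all data are invented for this example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a linear model with squared-error loss (illustrative only)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """One FL round: broadcast the global model, let every client train
    locally on its own private data, and collect the local models."""
    return [local_update(global_w, X, y) for X, y in client_data]

# Toy setup: 3 clients, each holding its own private data partition.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
global_w = np.zeros(4)
local_models = federated_round(global_w, clients)
```

The raw data never leaves the clients; only the locally trained parameters are returned to the server, which then aggregates them (for example with FedAvg, sketched further below).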
“…A few FL-aggregation algorithms are Federated Averaging (FedAvg) [6], Federated Stochastic Gradient Descent (FedSGD) [7], FedMA [8], MHAT (Model-Heterogeneous Aggregation Training) [9], FedADAGRAD, FedADAM, FedYOGI [6], Federated Mediation (FedMed) [10], and the Faster Adaptive FL algorithm (FAFED) [11]. A description of the various FL-aggregation algorithms is also available in paper [5]. The main challenge in building an ML model is that centralized algorithms rely on a single data source; this motivates a decentralized optimization framework for cooperative learning without explicitly exchanging raw data.…”
Section: Introduction
confidence: 99%
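Of the aggregation schemes listed above, FedAvg [6] is the usual baseline: the server replaces the global model with an average of the clients' locally trained parameters, weighted by each client's sample count. The sketch below is a minimal NumPy illustration of that weighted average, not code from the cited papers; `fedavg_aggregate` and the toy inputs are assumed for this example.

```python
import numpy as np

def fedavg_aggregate(local_models, client_sizes):
    """FedAvg-style aggregation: average the clients' parameter vectors,
    weighting each client by the number of samples it trained on."""
    stacked = np.stack(local_models)                 # shape: (n_clients, n_params)
    weights = np.array(client_sizes) / sum(client_sizes)  # n_k / n
    return (weights[:, None] * stacked).sum(axis=0)  # weighted parameter average

# Continuing the toy example: aggregate three local models of equal weight.
client_sizes = [20, 20, 20]
local_models = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
new_global_w = fedavg_aggregate(local_models, client_sizes)
print(new_global_w)  # -> [2. 2. 2. 2.] when all clients hold equal data
```

Variants such as FedSGD aggregate gradients instead of model weights, while adaptive schemes like FedADAM and FedYOGI apply an adaptive optimizer to the aggregated update on the server side.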