ESANN 2022 Proceedings
DOI: 10.14428/esann/2022.es2022-98
Federated Adaptation of Reservoirs via Intrinsic Plasticity

Abstract: We propose a novel algorithm for performing federated learning with Echo State Networks (ESNs) in a client-server scenario. In particular, our proposal focuses on the adaptation of reservoirs by combining Intrinsic Plasticity with Federated Averaging. The former is a gradient-based method for adapting the reservoir's non-linearity in a local and unsupervised manner, while the latter provides the framework for learning in the federated scenario. We evaluate our approach on real-world datasets from human monitori…
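The combination described in the abstract can be sketched as follows: each client runs local, unsupervised Intrinsic Plasticity (IP) updates on the gain and bias of its tanh reservoir neurons, and a server averages those parameters across clients in Federated Averaging style. This is a minimal illustration, not the paper's implementation; the IP rule shown is the standard Triesch-style update for a Gaussian target distribution, and all hyperparameters (`eta`, `mu`, `sigma`) and function names are assumptions.

```python
import numpy as np

def ip_update(a, b, x, eta=1e-4, mu=0.0, sigma=0.2):
    """One Intrinsic Plasticity step for tanh neurons (Gaussian target).

    a, b : gain and bias vectors of the reservoir neurons
    x    : net input to the neurons at the current timestep
    """
    y = np.tanh(a * x + b)
    s2 = sigma ** 2
    # Bias and gain updates of the Triesch IP rule for tanh activations
    db = -eta * (-(mu / s2) + (y / s2) * (2 * s2 + 1 - y ** 2 + mu * y))
    da = eta / a + db * x
    return a + da, b + db

def fedavg(params, weights):
    """Server side: weighted average of the clients' (gain, bias) pairs."""
    total = sum(weights)
    a = sum(w * p[0] for p, w in zip(params, weights)) / total
    b = sum(w * p[1] for p, w in zip(params, weights)) / total
    return a, b

# Toy federated round: 3 clients, each with a 5-neuron reservoir
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    a, b = np.ones(5), np.zeros(5)
    for x in rng.standard_normal((100, 5)):  # local unsupervised IP pass
        a, b = ip_update(a, b, x)
    clients.append((a, b))
a_glob, b_glob = fedavg(clients, weights=[1, 1, 1])
```

Note that only the small gain and bias vectors travel to the server; reservoir states and raw data stay on the clients, which is what makes the scheme communication-friendly.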

Cited by 3 publications (2 citation statements). References 6 publications.
“…In both datasets, we chunked the sequences into sections of 700 and 200 timesteps for WESAD and HHAR respectively. Similarly to [2], we employed a client-wise training-validation-test split of the dataset, which is 9-3-3 and 5-2-2 for WESAD and HHAR respectively. We conducted our experiments by involving an incremental number of training clients, i.e., 25%, 50%, 75% and 100%.…”
Section: Experimental Assessment
confidence: 99%
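The chunking step quoted above (fixed-length sections of 700 or 200 timesteps) can be sketched with plain array reshaping. The tail-dropping behaviour and the array shapes here are assumptions for illustration; the cited work may pad or overlap instead.

```python
import numpy as np

def chunk_sequence(seq, length):
    """Split a (T, features) sequence into non-overlapping chunks of
    `length` timesteps, dropping the incomplete tail (an assumption)."""
    n = (len(seq) // length) * length
    return seq[:n].reshape(-1, length, seq.shape[1])

wesad_like = np.zeros((2100, 8))            # toy stand-in for a WESAD-style sequence
chunks = chunk_sequence(wesad_like, 700)    # shape (3, 700, 8)
```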
“…This has led to the emergence of federated learning, a decentralized approach where models are trained on data distributed across multiple devices without the need to transfer the data to a central server. In such a setting, Federated Echo State Networks (ESNs) were proven to be effective thanks to their capability to efficiently handle temporal data, as well as for the low computational cost of the learning algorithms [1,2]. In this work, we aim to further improve FedRR [1], an algorithm that performs an exact computation of the global readout in a federated setting, towards communication efficiency.…”
Section: Introduction
confidence: 99%
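The "exact computation of the global readout" mentioned for FedRR can be illustrated with federated ridge regression: each client sends only the sufficient statistics HᵀH and HᵀY of its reservoir states, and the server sums them and solves one regularized linear system, recovering exactly the centralized solution. This is a generic sketch of that idea under assumed shapes, not the FedRR implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1e-2  # ridge regularization strength

# Per-client reservoir states (50 timesteps x 10 units) and 2-dim targets
H_parts = [rng.standard_normal((50, 10)) for _ in range(4)]
Y_parts = [rng.standard_normal((50, 2)) for _ in range(4)]

# Each client transmits only small summary matrices, never raw data
A = sum(H.T @ H for H in H_parts)                    # 10x10
B = sum(H.T @ Y for H, Y in zip(H_parts, Y_parts))   # 10x2
W_fed = np.linalg.solve(A + lam * np.eye(10), B)

# Centralized ridge regression on the pooled data gives the same readout
H_all, Y_all = np.vstack(H_parts), np.vstack(Y_parts)
W_central = np.linalg.solve(H_all.T @ H_all + lam * np.eye(10),
                            H_all.T @ Y_all)
```

Because HᵀH and HᵀY decompose additively over clients, the federated readout equals the centralized one up to floating-point error, while each upload is only O(units²) rather than proportional to the dataset size.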