2017 IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC) 2017
DOI: 10.1109/spawc.2017.8227645
Distributed nonlinear regression using in-network processing with multiple Gaussian kernels

Cited by 6 publications (8 citation statements)
References 12 publications
“…Motivated by the success of multiple kernel-based learnings [10], [12], [22], [23], our learned function in Section III will be assumed as a parameterized function in (6). We emphasize that the parameters to be optimized in the proposed algorithms are equivalent to those in OMKL [23] (e.g., { θ[t,p] , q[t,p] : p ∈ [P ]}), whereas the optimization technique (or learning algorithm) in Section III is completely different from that in OMKL.…”
Section: Preliminaries
confidence: 99%
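The excerpt above refers to a learned function parameterized as in OMKL, with per-kernel coefficients { θ[t,p] } and combination weights { q[t,p] } over P kernels. A minimal sketch of such a multi-Gaussian-kernel predictor, assuming a simple weighted-sum combination rule (the function and variable names here are illustrative, not taken from the cited papers):

```python
import numpy as np

def gaussian_kernel(x, x_t, sigma):
    """Gaussian (RBF) kernel with bandwidth sigma."""
    return np.exp(-np.sum((x - x_t) ** 2) / (2.0 * sigma ** 2))

def multi_kernel_predict(x, X_train, theta, q, sigmas):
    """Prediction as a weighted sum over P kernel-based learners.

    theta:  (T, P) per-kernel expansion coefficients (the θ[t,p] above)
    q:      (P,) combination weights (q[t,p] frozen at one round here)
    sigmas: (P,) bandwidth of each Gaussian kernel
    """
    P = len(sigmas)
    # f_p(x) = sum_t θ[t,p] * κ_p(x, x_t) for each kernel p
    f_p = np.array([
        sum(theta[t, p] * gaussian_kernel(x, X_train[t], sigmas[p])
            for t in range(len(X_train)))
        for p in range(P)
    ])
    # combined prediction f(x) = sum_p q_p * f_p(x)
    return float(np.dot(q, f_p))

# toy usage with random data
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5, 2))
theta = rng.normal(size=(5, 3))
q = np.array([0.5, 0.3, 0.2])
sigmas = np.array([0.5, 1.0, 2.0])
y_hat = multi_kernel_predict(rng.normal(size=2), X_train, theta, q, sigmas)
```

How the coefficients and weights are actually optimized is exactly where the cited algorithms differ; this sketch only shows the shared parameterization.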
“…A privacy-preserving distributed learning is witnessing an unprecedented interest, in which local data is kept at distributed edge devices without centralizing the data. According to the structures of a communication network, this can be categorized into a centralized federated learning [6], [7], [8], [9] and a fully distributed learning (a.k.a., a fully decentralized federated learning) [10], [11], [12], [13].…”
Section: Introduction
confidence: 99%
“…Regarding distributed kernel-based estimation algorithms, several schemes have been derived [33]- [42]. In [33] a distributed consensus-based regression algorithm based on kernel least squares has been proposed and extended by multiple kernels in [34]. Both schemes utilize alternating direction method of multipliers (ADMM) [43] for distributed consensusbased processing.…”
Section: A Background
confidence: 99%
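The excerpt above describes distributed consensus-based kernel least squares solved with ADMM. A minimal sketch of that idea, assuming a shared dictionary of Gaussian kernel centers and a global consensus variable (the specific update structure and all names are illustrative, not drawn from the cited schemes):

```python
import numpy as np

def kernel_matrix(X, centers, sigma):
    """Gaussian kernel design matrix over a shared dictionary of centers."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def consensus_admm_kls(local_data, centers, sigma=1.0, rho=1.0, iters=50):
    """Consensus-ADMM sketch for distributed kernel least squares.

    Each node j fits coefficients w_j of f(x) = sum_m w[m] * k(x, c_m)
    on its local (X_j, y_j) while ADMM drives all w_j to a common z.
    """
    M, J = len(centers), len(local_data)
    w = [np.zeros(M) for _ in range(J)]
    u = [np.zeros(M) for _ in range(J)]  # scaled dual variables
    z = np.zeros(M)
    # precompute per-node normal equations
    KtK, Kty = [], []
    for X_j, y_j in local_data:
        K_j = kernel_matrix(X_j, centers, sigma)
        KtK.append(K_j.T @ K_j)
        Kty.append(K_j.T @ y_j)
    for _ in range(iters):
        for j in range(J):  # local primal updates (parallel across nodes)
            w[j] = np.linalg.solve(KtK[j] + rho * np.eye(M),
                                   Kty[j] + rho * (z - u[j]))
        z = np.mean([w[j] + u[j] for j in range(J)], axis=0)  # consensus step
        for j in range(J):  # dual ascent
            u[j] += w[j] - z
    return z

# toy usage: three nodes, local samples of the same underlying function
rng = np.random.default_rng(1)
centers = rng.normal(size=(4, 1))
data = []
for j in range(3):
    X = rng.normal(size=(20, 1))
    data.append((X, np.sin(X[:, 0])))
z = consensus_admm_kls(data, centers)
```

In a fully decentralized variant, the averaging in the z-update would itself be replaced by neighbor-to-neighbor exchanges rather than a global mean.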