2018 52nd Asilomar Conference on Signals, Systems, and Computers
DOI: 10.1109/acssc.2018.8645549
Distributed Ridge Regression with Feature Partitioning

Abstract: We develop a new distributed algorithm to solve the ridge regression problem with feature partitioning of the observation matrix. The proposed algorithm, named D-Ridge, is based on the alternating direction method of multipliers (ADMM) and estimates the parameters when the observation matrix is distributed among different agents with feature (or vertical) partitioning. We formulate the associated ridge regression problem as a distributed convex optimization problem and utilize the ADMM to obtain an iterative s…
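The abstract is truncated, so the exact D-Ridge iterations are not shown here. As a rough illustration of the general idea, the sketch below applies the standard "sharing" form of ADMM (as in Boyd et al.'s distributed-optimization monograph) to ridge regression with the observation matrix split by columns (features) across agents. The function name `dridge_sharing_admm`, the solver structure, and all parameter choices are assumptions for illustration, not the authors' D-Ridge algorithm.

```python
import numpy as np

def dridge_sharing_admm(A_blocks, b, lam=1.0, rho=1.0, iters=1000):
    """Sharing-form ADMM sketch for ridge regression with feature
    (vertical) partitioning: agent i holds a column block A_i of the
    observation matrix and its own coefficient block x_i.

    Solves min_x (1/2)||sum_i A_i x_i - b||^2 + (lam/2) sum_i ||x_i||^2.
    Hypothetical re-derivation; not the paper's D-Ridge code."""
    N = len(A_blocks)
    m = b.shape[0]
    xs = [np.zeros(A.shape[1]) for A in A_blocks]
    Ax = [np.zeros(m) for _ in A_blocks]   # local products A_i x_i
    zbar = np.zeros(m)                     # shared averaged variable
    u = np.zeros(m)                        # scaled dual variable
    for _ in range(iters):
        Axbar = sum(Ax) / N                # average of local products
        for i, A in enumerate(A_blocks):
            # Local target for A_i x_i (Jacobi update, parallelizable)
            v = Ax[i] + zbar - Axbar - u
            # x_i-update: solve (lam I + rho A_i^T A_i) x_i = rho A_i^T v
            xs[i] = np.linalg.solve(
                lam * np.eye(A.shape[1]) + rho * A.T @ A, rho * A.T @ v)
        Ax = [A @ x for A, x in zip(A_blocks, xs)]
        Axbar = sum(Ax) / N
        # zbar-update: closed form for the quadratic loss (1/2)||.-b||^2
        zbar = (b + rho * (u + Axbar)) / (N + rho)
        # Scaled dual ascent
        u = u + Axbar - zbar
    return np.concatenate(xs)
```

Because every subproblem here is an unconstrained quadratic, each update has a closed form, and the iterates converge to the centralized ridge solution (A^T A + lam I)^{-1} A^T b; only the m-dimensional vectors Axbar, zbar, and u would need to be exchanged among agents, never the raw feature blocks.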


Citing publications: 2019–2024

Cited by 15 publications (9 citation statements)
References 29 publications
“…Learning over Distributed Features. Gratton et al (2018) applies ADMM to solve ridge regression. Ying et al (2018) proposes a stochastic learning method via variance reduction.…”
Section: Related Work (mentioning)
confidence: 99%
“…In this context, each agent in the network only possesses information of a local cost function and the agents aim to collaboratively minimize the sum of the local objective functions. Such optimization problems are relevant to several applications in statistics [3]- [5], signal processing [6]- [8] and control [1], [2].…”
Section: Introduction (mentioning)
confidence: 99%
“…There have been several works developing algorithms for solving distributed convex optimization problems over ad-hoc networks. However, many existing algorithms only offer solutions for problems with smooth objective functions, see, e.g., [5], [9], [10]. Distributed optimization problems with non-smooth objectives have been considered in [1], [2], [4], [11]- [16].…”
Section: Introduction (mentioning)
confidence: 99%
“…Furthermore, collecting all the data in a fusion center creates a single point of failure. Therefore, it is imperative to develop algorithms that are capable of processing data spread across multiple agents [1][2][3][4][5][6][7].…”
Section: Introduction (mentioning)
confidence: 99%
“…Their relatively high computational complexity has partially motivated the works in [16][17][18][19][20]. While the approach of [16] is based on the average consensus strategy, the algorithms in [17][18][19][20][21] are based on diffusion strategies and, therefore, suffer from relatively slow convergence [6]. The convergence speed of the algorithm proposed in [13] greatly depends on the network topology and dimensionality of the data.…”
Section: Introduction (mentioning)
confidence: 99%