2019
DOI: 10.1109/tac.2019.2907711

Distributed Newton Method for Large-Scale Consensus Optimization

Abstract: In this paper, we propose a distributed Newton method for consensus optimization. Our approach outperforms state-of-the-art methods, including ADMM. The key idea is to exploit the sparsity of the dual Hessian and recast the computation of the Newton step as one of efficiently solving symmetric diagonally dominant linear equations. We validate our algorithm both theoretically and empirically. On the theory side, we demonstrate that our algorithm exhibits superlinear convergence within a neighborhood of optimali…
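The abstract's key idea is that the Newton step reduces to solving a symmetric diagonally dominant (SDD) linear system. A minimal sketch of that reduction (the matrix, right-hand side, and solver here are illustrative assumptions, not the paper's actual algorithm): compute a Newton-type step d = -H⁻¹g with conjugate gradient, which needs only matrix-vector products and is thus amenable to distributed, neighbor-to-neighbor computation.

```python
import numpy as np

# Hypothetical example, not the paper's method: solve an SDD system H d = -g
# by conjugate gradient, using only matrix-vector products with H.
def conjugate_gradient(H, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - H @ x          # initial residual
    d = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Hd = H @ d
        alpha = rs / (d @ Hd)
        x += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

# An SDD matrix: the Laplacian of a 3-node path graph plus a small
# diagonal shift to make it positive definite.
H = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float) + 0.1 * np.eye(3)
g = np.array([1.0, 0.0, -1.0])   # a stand-in gradient
step = -conjugate_gradient(H, g)
print(np.allclose(H @ step, -g))  # True
```

Because every operation is a product with a sparse, graph-structured matrix, each iteration can be carried out with one round of communication between neighboring nodes.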

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1
1

Citation Types

0
34
0

Year Published

2019
2019
2025
2025

Publication Types

Select...
5
3

Relationship

1
7

Authors

Journals

Cited by 53 publications (34 citation statements)
References 16 publications
“…then (15) and in turn (13) are satisfied, which by Theorem 1.1 of [10] implies (14). Recalling that L_F = L_f + ρλ_n, a simple rearrangement of (16) gives (10).…”
Section: A. The Primal-Dual Algorithm
confidence: 94%
“…, x_n] ∈ R^n, and Ax = 0 represents all equality constraints. Some choices for matrix A include the edge-node incidence matrix [4], weighted incidence matrix [45], graph Laplacian matrix [41], and weighted Laplacian matrix [23,1]. In this paper, we choose matrix A to be the edge-node incidence matrix of the network graph, i.e., A ∈ R^{ℓ×n}, ℓ = |E|, whose null space is spanned by the vector of all ones.…”
Section: Our Contributions
confidence: 99%
“…The authors in [9] proposed a distributed primal BFGS algorithm which converges to the exact solution at a linear rate under a strong convexity assumption. The authors in [41] proposed a primal-dual algorithm, which minimizes the augmented Lagrangian in the primal space and uses an approximate Newton step to update the dual variable. The iterates of this algorithm go through a quadratic phase of convergence and converge to the exact solution.…”
confidence: 99%
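The primal-dual scheme described in this excerpt can be sketched in its simplest form (this is an illustrative method of multipliers on a toy quadratic problem with made-up data, not the cited algorithm): minimize the augmented Lagrangian exactly in the primal variable, then take a dual ascent step on the constraint residual.

```python
import numpy as np

# Toy consensus problem: minimize sum_i 0.5*(x_i - a_i)^2 subject to Ax = 0,
# where A is the incidence matrix of a path graph. Data `a` is hypothetical.
a = np.array([1.0, 3.0, 5.0, 7.0])
edges = [(0, 1), (1, 2), (2, 3)]
n, m = len(a), len(edges)
A = np.zeros((m, n))
for k, (i, j) in enumerate(edges):
    A[k, i], A[k, j] = 1.0, -1.0

rho = 1.0                # augmented-Lagrangian penalty parameter
lam = np.zeros(m)        # dual variable (one multiplier per edge)
for _ in range(200):
    # Primal step: exact minimizer of L_rho(x, lam) for this quadratic case,
    # from the optimality condition (x - a) + A.T lam + rho * A.T A x = 0.
    H = np.eye(n) + rho * A.T @ A
    x = np.linalg.solve(H, a - A.T @ lam)
    # Dual step: gradient ascent on the constraint residual Ax.
    lam += rho * (A @ x)

print(np.round(x, 4))    # all entries close to mean(a) = 4.0
```

At the optimum all nodes agree on the average of the local data, which is the consensus solution of the toy problem.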
“…; x_n) = Σ_{i=1}^{n} f_i(x_i), and define matrix L ∈ R^{n×n} as the Laplacian matrix of the graph G. It can be easily verified (see [17]) that the constraint x_1 = ⋯ = x_n is equivalent to 𝐋x = 0, where 𝐋 = L ⊗ I_p ∈ R^{np×np} is the Kronecker product of the Laplacian matrix L and the identity matrix I_p. By incorporating these definitions, Problem (2) can be written as…”
Section: Problem Formulation
confidence: 99%
“…This paper is organized as follows. Section II recalls the problem of distributed optimization over networks and presents its formulation in terms of a set of equality constraints related to the Laplacian of the network as in [17]. Section III describes the dual formulation of the distributed optimization problem and some basic properties of the dual problem.…”
Section: Introduction
confidence: 99%