2018
DOI: 10.1080/01621459.2018.1429274

Communication-Efficient Distributed Statistical Inference

Abstract: We present a Communication-efficient Surrogate Likelihood (CSL) framework for solving distributed statistical inference problems. CSL provides a communication-efficient surrogate to the global likelihood that can be used for low-dimensional estimation, high-dimensional regularized estimation and Bayesian inference. For low-dimensional estimation, CSL provably improves upon naive averaging schemes and facilitates the construction of confidence intervals. For high-dimensional regularized estimation, CSL leads to…
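The surrogate idea in the abstract can be illustrated with a short sketch. The block below is a minimal, hypothetical example of a CSL-style surrogate for distributed logistic regression, assuming the surrogate on the first machine takes the form L_1(θ) − ⟨∇L_1(θ̄) − ∇L_N(θ̄), θ⟩ after one round of gradient communication; the data generation, function names, and the use of NumPy/SciPy are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CSL-style surrogate for distributed logistic regression.
import numpy as np
from scipy.optimize import minimize

def logistic_loss(theta, X, y):
    """Average negative log-likelihood for labels y in {0, 1}."""
    z = X @ theta
    return np.mean(np.log1p(np.exp(z)) - y * z)

def logistic_grad(theta, X, y):
    z = X @ theta
    p = 1.0 / (1.0 + np.exp(-z))
    return X.T @ (p - y) / len(y)

def csl_surrogate_estimate(local_X, local_y, theta_bar, global_grad):
    """Minimize the surrogate  L_1(theta) - <grad L_1(theta_bar) - global_grad, theta>
    using only the first machine's data and the communicated global gradient."""
    correction = logistic_grad(theta_bar, local_X, local_y) - global_grad

    def surrogate(theta):
        return logistic_loss(theta, local_X, local_y) - correction @ theta

    def surrogate_grad(theta):
        return logistic_grad(theta, local_X, local_y) - correction

    res = minimize(surrogate, theta_bar, jac=surrogate_grad, method="L-BFGS-B")
    return res.x

# Toy usage: m machines each hold a shard (X_j, y_j); theta_bar is a crude
# initial estimate, and the global gradient at theta_bar is averaged once.
rng = np.random.default_rng(0)
m, n, d = 4, 200, 5
theta_true = rng.normal(size=d)
shards = []
for _ in range(m):
    X = rng.normal(size=(n, d))
    y = (rng.random(n) < 1 / (1 + np.exp(-X @ theta_true))).astype(float)
    shards.append((X, y))

theta_bar = np.zeros(d)
global_grad = np.mean(
    [logistic_grad(theta_bar, X, y) for X, y in shards], axis=0
)
theta_csl = csl_surrogate_estimate(*shards[0], theta_bar, global_grad)
```

Minimizing the surrogate requires only the first machine's data plus the averaged gradient, which is where the communication savings come from; the construction can be iterated around the new estimate to refine it further.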

Cited by 355 publications (382 citation statements). References 18 publications.
“…In return, downloading the models from servers to devices in proximity provisions the latter with the intelligence to respond to real-time events. While computing speeds are growing rapidly, wireless transmission of high-dimensional data by many devices suffers from the scarcity of radio resources and the hostility of wireless channels, resulting in a communication bottleneck for fast edge learning [5], [6].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…This corollary shows that the quantized version of the unconstrained decentralized gradient method achieves a similar upper bound to the projected gradient method in [3] (see (18)), even for N > 2. The only difference is that the log(d) factor in Equation (18) has been replaced with log(W). The bound W does not always depend on the dimension d; see, e.g., the upper bound in Equation (10).…”
Section: B. Main Result: Maintaining the Linear Convergence
Citation type: mentioning (confidence: 99%)
“…Shamir et al. [37] and Zhang and Xiao [38] proposed truly communication-efficient distributed optimization algorithms which leveraged local second-order information, though these approaches are only guaranteed to work for convex and smooth objectives. In a similar spirit, Wang et al. [8], Jordan et al. [9], and Ren et al. [10] developed communication-efficient algorithms for sparse learning with ℓ1 regularization. However, each of these works needs an assumption about the strong convexity of the loss functions, which may limit their applicability to only a small set of real-world applications.…”
Section: B. Related Work
Citation type: mentioning (confidence: 99%)
“…III. THEORETICAL ANALYSIS. Solving subproblem (3) is inspired by the approaches of Shamir et al. [37], Wang et al. [8], and Jordan et al. [9], and is designed to take advantage of both global first-order information and local higher-order information. Indeed, when ρ = 0 and L_j is quadratic, (3) has the following closed-form solution:…”
Section: Algorithm
Citation type: mentioning (confidence: 99%)
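To make the last excerpt concrete, here is a hedged sketch of why a quadratic local loss yields a closed-form update. It assumes that subproblem (3) is a CSL/DANE-style surrogate of the form below, built on machine j around an initial estimate θ̄; this assumed form, and the symbols H_j and θ̄, are for illustration only and are not reproduced from the citing paper.

```latex
% A minimal sketch (assumed form; not the citing paper's exact subproblem (3)).
% CSL-style surrogate on machine j around an initial estimate \bar{\theta}:
%   \tilde{L}_j(\theta) = L_j(\theta)
%       - \langle \nabla L_j(\bar{\theta}) - \nabla L_N(\bar{\theta}), \theta \rangle .
% If L_j is quadratic with Hessian H_j, then
%   \nabla L_j(\theta) = \nabla L_j(\bar{\theta}) + H_j (\theta - \bar{\theta}),
% so setting \nabla \tilde{L}_j(\theta) = 0 gives the Newton-type closed form
\[
  \hat{\theta} \;=\; \bar{\theta} \;-\; H_j^{-1}\,\nabla L_N(\bar{\theta}),
\]
% which combines the global first-order information \nabla L_N(\bar{\theta})
% with the local higher-order information H_j.
```

Under this assumed form, the update uses the globally averaged gradient while the curvature comes entirely from the local data, which is the sense in which the subproblem exploits both global first-order and local higher-order information.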