2016
DOI: 10.1137/15m1024950
ARock: An Algorithmic Framework for Asynchronous Parallel Coordinate Updates

Abstract: Finding a fixed point to a nonexpansive operator, i.e., x * = T x * , abstracts many problems in numerical linear algebra, optimization, and other areas of scientific computing. To solve fixed-point problems, we propose ARock, an algorithmic framework in which multiple agents (machines, processors, or cores) update x in an asynchronous parallel fashion. Asynchrony is crucial to parallel computing since it reduces synchronization wait, relaxes communication bottleneck, and thus speeds up computing significantly…
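Below is a minimal, self-contained Python sketch of the kind of asynchronous coordinate update the abstract describes: several threads repeatedly pick a coordinate i, take a possibly stale snapshot x̂ of the shared iterate, and relax x_i toward (T x̂)_i. The operator T, step sizes, and problem data here are illustrative assumptions rather than the paper's implementation, and in CPython the GIL prevents real parallel speedup, so this only demonstrates the update rule, not the performance claims.

```python
import threading
import numpy as np

# Illustrative setup (an assumption, not from the paper): T is one gradient
# step on the convex quadratic f(x) = 0.5 * ||Ax - b||^2, which makes
# T(x) = x - gamma * grad f(x) nonexpansive whenever gamma <= 2 / ||A||_2^2.
rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/L, L = ||A^T A||_2

def T(x):
    """Nonexpansive operator: a single gradient-descent step on f."""
    return x - gamma * (A.T @ (A @ x - b))

x = np.zeros(n)        # shared iterate; agents read and write it without locks
eta = 0.5              # relaxation parameter for the coordinate update

def agent(seed, num_updates=4000):
    local_rng = np.random.default_rng(seed)
    for _ in range(num_updates):
        i = local_rng.integers(n)      # pick one coordinate uniformly at random
        x_hat = x.copy()               # possibly stale/inconsistent read of x
        # Relax only coordinate i toward (T x_hat)_i; a real implementation
        # would evaluate just the i-th entry of T rather than the full vector.
        x[i] -= eta * (x_hat[i] - T(x_hat)[i])

threads = [threading.Thread(target=agent, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("fixed-point residual ||x - T(x)||:", np.linalg.norm(x - T(x)))
```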

Cited by 213 publications (311 citation statements)
References 54 publications
“…Third, it is interesting to develop a primal-dual type MC-BCD, which would apply to a model-free DMDP along a single trajectory. Yet another line of work applies block coordinate update to linear and nonlinear fixed-point problems [18,17,5] because it can solve optimization problems in imaging and conic programming, which are equipped with nonsmooth, nonseparable objectives, and constraints.…”
Section: Possible Future Work
confidence: 99%
“…Compared to the optimization algorithms used in these methods (Haufe et al, 2008; Chang et al, 2010; Sohrabpour et al, 2016), the proposed algorithm in this paper is more efficient and robust, and it is also able to tackle large-scale problems. Further, with this type of problem formation, it is possible to adopt some computing techniques (Peng et al, 2015) to further accelerate the algorithm, which will be the future work.…”
Section: Discussion
confidence: 99%
“…Challenges of distributed learning also lie in asynchrony and delay introduced by e.g., IoT mobility and heterogeneity. Asynchronous parallel learning schemes are thus worth investigating by leveraging advances in static optimization settings [14], [75]. From distributed machine learning to distributed control, multi-agent reinforcement learning will play a critical role in distributed control for IoT [58].…”
Section: Lessons Learned and The Road Ahead
confidence: 99%