1998
DOI: 10.1002/(sici)1099-1506(199801/02)5:1<11::aid-nla123>3.0.co;2-f
A parallel multisplitting solution of the least squares problem

Cited by 33 publications (6 citation statements)
References 13 publications
“…A selection strategy, denoted by "Sel" in the results indicating Selection, is also tested in which at each step the update chosen is that which gives the greatest reduction in the objective function when chosen from the local solutions Y i , the BJ update z (k) and the convex update given by (2.13). This is consistent with the approach presented for the convex cases in [17], [15], and permits evaluation of whether the optimal update is necessary for obtaining a good solution.…”
Section: Evaluation of the PVDTLS Algorithms (supporting, confidence: 81%)
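The selection strategy quoted above picks, at each step, whichever candidate update most reduces the least-squares objective. A minimal sketch of that idea, assuming synthetic data (the real candidates would be the local solutions Y_i, the block-Jacobi update z^(k), and the convex update, which are not reproduced here):

```python
import numpy as np

def select_update(A, b, candidates):
    """Return the candidate iterate with the smallest residual norm
    ||A x - b||_2, i.e. the greatest reduction of the least-squares
    objective among the candidates.
    """
    residuals = [np.linalg.norm(A @ x - b) for x in candidates]
    return candidates[int(np.argmin(residuals))]

# Illustration with made-up data (sizes and names are assumptions).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
candidates = [np.zeros(3), x_ls + 0.1 * rng.standard_normal(3), x_ls]
best = select_update(A, b, candidates)
```

Since the exact least-squares solution is among the candidates, the selection returns it.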
“…This approach is suitable for both sparse and dense data structures. The specialization of PVD to the linear least squares problem was presented in [15], [16] and a more general approach was presented in [17].…”
Section: Introduction (mentioning, confidence: 99%)
“…Another method of acceleration is to combine the iterative algorithms with parallel computing schemes. One example would be partitioning the projection data, pixels, or system matrix, relating the former two, and then operating on subsets, which would allow parallel computation (Renaut 1998, Jiang et al 2013, Lu and Xiao 2015, Gao and Blumensath 2018). Decomposing the optimization problem into several sub-problems is also an effective way to enable parallelization, such as what occurs in the alternating direction method of multipliers (ADMM) for convex problems (Boyd et al 2011, Parikh and Boyd 2014, Deng et al 2017.…”
(mentioning, confidence: 99%)
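The decomposition idea mentioned above — splitting the data across workers and coordinating local subproblems — can be sketched with consensus ADMM on a row-partitioned least-squares problem. This is a hedged illustration of the general ADMM scheme (Boyd et al. 2011) with synthetic data, not the cited tomography papers' exact algorithms:

```python
import numpy as np

def admm_consensus_ls(A_parts, b_parts, rho=1.0, iters=500):
    """Consensus ADMM for min (1/2)||Ax - b||^2 with the rows of
    (A, b) partitioned across workers; each worker solves a small
    regularized least-squares subproblem, and the results are
    averaged into a consensus iterate.
    """
    n = A_parts[0].shape[1]
    p = len(A_parts)
    xs = [np.zeros(n) for _ in range(p)]
    us = [np.zeros(n) for _ in range(p)]   # scaled dual variables
    z = np.zeros(n)                        # consensus variable
    for _ in range(iters):
        for i in range(p):
            # Local x-update: one small solve per worker, parallelizable.
            lhs = A_parts[i].T @ A_parts[i] + rho * np.eye(n)
            rhs = A_parts[i].T @ b_parts[i] + rho * (z - us[i])
            xs[i] = np.linalg.solve(lhs, rhs)
        z = np.mean([xs[i] + us[i] for i in range(p)], axis=0)  # gather
        for i in range(p):
            us[i] += xs[i] - z             # dual update
    return z

# Synthetic split: two row blocks of a tall full-rank system.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
z = admm_consensus_ls([A[:4], A[4:]], [b[:4], b[4:]])
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The consensus iterate converges to the global least-squares solution because the summed local objectives equal the full objective.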
“…To parallelize the stationary iterative methods, space decomposition methods are employed to partition matrix A into blocks (block Jacobi or block SOR) as well as the original problem into smaller local problems (Frommer & Renaut, 1999). Renaut (1998) also proposed a parallel multi-splitting solution of the least-squares problem where the solutions to the local problems are recombined using weighting matrices.…”
Section: Distributed Tomography Algorithms (mentioning, confidence: 99%)
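The multisplitting idea described in the last statement — solving local problems on variable blocks and recombining them with weighting matrices — can be sketched as follows. This is a hedged stand-in using a simple convex combination of local solutions on a well-conditioned synthetic system, not Renaut's exact weighting scheme:

```python
import numpy as np

def multisplit_ls(A, b, blocks, weights, iters=300):
    """Weighted multisplitting iteration for min ||Ax - b||_2 with the
    columns (variables) of A partitioned into blocks.  Each local
    problem frees one block of variables and freezes the rest; the
    local solutions are recombined by nonnegative weights summing to
    one, standing in for the weighting matrices of the cited scheme.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x_next = np.zeros_like(x)
        for blk, w in zip(blocks, weights):
            # Residual with block `blk` removed from the current iterate.
            r = b - A @ x + A[:, blk] @ x[blk]
            y, *_ = np.linalg.lstsq(A[:, blk], r, rcond=None)
            local = x.copy()
            local[blk] = y                 # local solution for this block
            x_next += w * local            # weighted recombination
        x = x_next
    return x

# Well-conditioned synthetic system so the damped iteration converges.
rng = np.random.default_rng(2)
A = np.vstack([np.eye(4), 0.3 * rng.standard_normal((2, 4))])
b = rng.standard_normal(6)
x = multisplit_ls(A, b, blocks=[[0, 1], [2, 3]], weights=[0.5, 0.5])
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

At a fixed point every block is simultaneously optimal given the others, which for a full-column-rank A is exactly the global least-squares solution.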