2020
DOI: 10.1007/s11424-020-8313-y

Synchronous Parallel Block Coordinate Descent Method for Nonsmooth Convex Function Minimization

Cited by 5 publications (3 citation statements) · References 17 publications
“…According to (15), the variable x_k is a stationary point of (1) if g_k = 0 [34]. Hence, g_k can be regarded as a measure of the stationarity of the variable x_k.…”
Section: Convergence Rate for Nonconvex Optimization
Citation type: mentioning (confidence: 99%)
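The excerpt refers to a measure g_k defined in equation (15) of the citing paper, which is not reproduced here. For orientation only, a standard stationarity measure of this form in proximal-gradient analysis is the gradient mapping; the step size α and the composite splitting f + h below are assumptions, not necessarily the citing paper's exact definition:

g_k = \frac{1}{\alpha}\Bigl( x_k - \operatorname{prox}_{\alpha h}\bigl( x_k - \alpha \nabla f(x_k) \bigr) \Bigr)

Under this (assumed) definition, g_k = 0 exactly when x_k is a stationary point of min_x f(x) + h(x), which matches the role the excerpt assigns to g_k.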
“…A popular approach to solving problem (1) is projected stochastic gradient descent (pSGD) [12]-[14]. In pSGD and its variants, the iteration variable is projected back onto X after taking a step in the direction of the negative stochastic gradient [15]-[20]. Such algorithms are efficient when the computational cost of performing the projection is low, e.g., projecting onto a hypercube or a simplex.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
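The pSGD scheme described in the excerpt above alternates a stochastic gradient step with a projection back onto the feasible set X. Below is a minimal sketch, assuming X is a hypercube so the projection reduces to a coordinate-wise clip; the grad_oracle interface, step size, and iteration count are illustrative assumptions rather than details of the cited works.

import numpy as np

def psgd_box(grad_oracle, x0, lower, upper, step=0.01, iters=1000, seed=0):
    # grad_oracle(x, rng): returns an unbiased stochastic gradient of the smooth
    # objective at x (hypothetical interface, for illustration only).
    rng = np.random.default_rng(seed)
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(iters):
        g = grad_oracle(x, rng)                   # draw a stochastic gradient sample
        x = np.clip(x - step * g, lower, upper)   # gradient step, then project onto the box
    return x

When X is a box, the projection is a single np.clip call, which illustrates the excerpt's point that pSGD is attractive whenever the projection is cheap.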
“…They established global convergence and a linear convergence rate for this method under a locally Lipschitz error bound assumption. Subsequently, Yun, Tseng and Toh [48] extended the BCGD algorithm proposed in [42] from problems over R^n to problems over R^{m×n}, which requires an additional convexity assumption on G. For the same type of problem, Dai and Weng [19] proposed a synchronous parallel block coordinate descent algorithm together with a randomized variant. Additionally, when G is not assumed to be convex, Hua and Yamashita [23] discussed a class of block coordinate proximal gradient algorithms based on Bregman functions, which may differ at each iteration.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
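The synchronous parallel block coordinate descent mentioned in the excerpt updates coordinate blocks independently from a common iterate and commits all updates at once. Below is a minimal sketch for a composite objective f(x) + λ‖x‖₁; the ℓ1 regularizer, the block partition, and the uniform step size are illustrative assumptions, not the exact setting of Dai and Weng [19].

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (coordinate-wise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sync_parallel_bcd(grad_f, x0, blocks, lam=0.1, step=0.1, iters=100):
    # grad_f(x): full gradient of the smooth part f (assumed interface).
    # blocks: list of index arrays partitioning the coordinates of x.
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)                       # gradient evaluated once at the shared iterate
        x_new = x.copy()
        for idx in blocks:                  # block updates are independent, so this loop
            v = x[idx] - step * g[idx]      # could be distributed across workers
            x_new[idx] = soft_threshold(v, step * lam)
        x = x_new                           # synchronous commit of all block updates
    return x

Because every block reads the same iterate x and writes into x_new, the inner loop can run in parallel without coordination until the synchronous commit at the end of each iteration.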