2015
DOI: 10.1109/tsp.2015.2436357
Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization

Abstract: We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (possibly nonseparable), convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of this work is a novel parallel, hybrid random/deterministic decomposition scheme wherein, at each iteration, a subset of (block) variables is updated at the same time by minimizing a convex surrogate of the original …
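To make the truncated description above concrete, here is a minimal Python sketch of one plausible reading of such a hybrid random/deterministic parallel block update, assuming an l1 regularizer as the nonsmooth term and a simple quadratic surrogate solved by soft-thresholding. The function names (hybrid_block_prox_grad, soft_threshold), the parameters (sample_frac, greedy_frac), and the greedy-within-random selection are illustrative assumptions, not the paper's notation or its exact algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (closed form)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def hybrid_block_prox_grad(grad_f, x0, n_blocks, lam, step,
                           sample_frac=0.5, greedy_frac=0.5,
                           n_iter=200, seed=0):
    """Sketch of a hybrid random/deterministic parallel block scheme for
    min_x f(x) + lam*||x||_1, with f smooth and possibly nonconvex.

    Per iteration:
      1) randomly draw a subset of blocks,
      2) deterministically keep the drawn blocks whose candidate update
         moves the most (within greedy_frac of the largest move),
      3) update the kept blocks simultaneously (Jacobi-style, hence
         parallelizable) by minimizing a quadratic surrogate of f plus
         the l1 term, i.e. a per-block proximal-gradient step.
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for _ in range(n_iter):
        g = grad_f(x)                          # gradient at the current point
        n_draw = max(1, int(sample_frac * n_blocks))
        drawn = rng.choice(n_blocks, size=n_draw, replace=False)
        cand, move = {}, {}
        for b in drawn:
            idx = blocks[b]
            cand[b] = soft_threshold(x[idx] - step * g[idx], step * lam)
            move[b] = np.linalg.norm(cand[b] - x[idx])
        biggest = max(move.values())
        for b in drawn:                        # greedy filter, then parallel write
            if move[b] >= greedy_frac * biggest:
                x[blocks[b]] = cand[b]
    return x

# usage sketch: sparse least squares  min 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
b = A @ (rng.random(200) < 0.05).astype(float)
step = 1.0 / np.linalg.norm(A, ord=2) ** 2     # 1/L for this quadratic f
x_hat = hybrid_block_prox_grad(lambda x: A.T @ (A @ x - b),
                               np.zeros(200), n_blocks=20, lam=0.1, step=step)
```

The updates in step 3 all read the same current iterate and write disjoint blocks, which is what makes them parallelizable across cores or machines; the random draw plus the deterministic (greedy) filter is only one way the "hybrid" selection could be realized.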

Cited by 64 publications (64 citation statements) | References 50 publications
“…For example, in [113], two distributed learning algorithms for training random vector functional-link (RVFL) networks through interconnected nodes were presented, where training data were distributed under a decentralized information structure. To tackle the huge-scale convex and nonconvex big data optimization problems, a novel parallel, hybrid random/deterministic decomposition scheme with the power of dictionary learning was investigated in [114]. In [87], the authors developed a low-complexity, real-time online algorithm for decomposing low-rank tensors with missing entries to deal with the incomplete streaming data, and the performance of the proposed subspace learning was also validated.…”
Section: The Latest Research Progress (mentioning)
confidence: 99%
“…We consider two commonly used rules to select the block variable, namely, the cyclic update rule and the random update rule. Note that both of them are well-known (see [18,19]), but we give their definitions for the sake of reference in later developments.…”
Section: The Proposed Block Successive Convex Approximation Algorithm (mentioning)
confidence: 99%
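As a side note on the two rules quoted above, a minimal sketch of how a cyclic and a random block-selection rule could look; the function names and the example probability vector are illustrative assumptions, not the cited papers' definitions.

```python
import numpy as np

def cyclic_rule(t, n_blocks):
    """Cyclic rule: at iteration t, update block t mod n_blocks,
    sweeping all blocks in a fixed order."""
    return t % n_blocks

def random_rule(rng, p):
    """Random rule: draw one block index according to a probability
    vector p with strictly positive entries."""
    return rng.choice(len(p), p=p)

# usage sketch
rng = np.random.default_rng(0)
print([cyclic_rule(t, 4) for t in range(8)])                  # 0,1,2,3,0,1,2,3
print([random_rule(rng, [0.4, 0.3, 0.2, 0.1]) for _ in range(8)])
```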
“…and Σ_k p_k^t = 1. Any block variable can be selected with a nonzero probability, and some examples are given in [19].…”
Section: Algorithm (mentioning)
confidence: 99%
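A small illustration of the quoted conditions on the selection probabilities: positive entries that sum to one, so every block can be drawn. The helper make_selection_probs and the example scores are hypothetical; concrete rules are the ones referenced as [19] in the citing paper.

```python
import numpy as np

def make_selection_probs(scores):
    """Turn strictly positive block scores into a valid probability
    vector p^t: positive entries summing to one, so every block has a
    nonzero chance of being selected."""
    scores = np.asarray(scores, dtype=float)
    assert np.all(scores > 0), "every block needs a nonzero chance"
    p = scores / scores.sum()
    assert np.isclose(p.sum(), 1.0)
    return p

p_t = make_selection_probs([3.0, 1.0, 1.0, 5.0])   # illustrative scores
block = np.random.default_rng(0).choice(len(p_t), p=p_t)
```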