SC20: International Conference for High Performance Computing, Networking, Storage and Analysis 2020
DOI: 10.1109/sc41405.2020.00061
Newton-ADMM: A Distributed GPU-Accelerated Optimizer for Multiclass Classification Problems

Cited by 4 publications (3 citation statements) · References 15 publications
“…Xu et al. (2017) [57] proposed consensus ADMM in which each agent automatically tuned the local penalty parameters in an adaptive manner. Fang et al. (2020) [58] proposed a consensus ADMM approach whereby the Newton method was applied to each subproblem to improve the quality of the subproblem solutions. The authors of [62] presented a randomized incremental primal-dual method to solve the CO problem, whereby the dual variable over the connected multi-agent network was updated in each iteration only at a randomly selected node.…”
Section: Alternating Direction Methods of Multipliers
confidence: 99%
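The consensus-ADMM scheme described in the statement above — each agent solving its local subproblem exactly (here a single Newton/closed-form step, in the spirit of the Newton-per-subproblem approach), followed by an averaging consensus update — can be sketched on a toy least-squares objective. The function name and problem setup are illustrative assumptions, not code from the cited papers:

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=300):
    """Consensus ADMM: agent i minimizes 0.5*||A_i x - b_i||^2 while all
    agents agree on a shared variable z. Each quadratic subproblem is
    solved exactly (one Newton step)."""
    n, m = As[0].shape[1], len(As)
    xs = [np.zeros(n) for _ in range(m)]   # local primal variables
    us = [np.zeros(n) for _ in range(m)]   # scaled dual variables
    z = np.zeros(n)                        # global consensus variable
    for _ in range(iters):
        for i in range(m):
            # x-update: closed-form (Newton) solve of the local subproblem
            H = As[i].T @ As[i] + rho * np.eye(n)
            g = As[i].T @ bs[i] + rho * (z - us[i])
            xs[i] = np.linalg.solve(H, g)
        # z-update: averaging enforces consensus across agents
        z = np.mean([xs[i] + us[i] for i in range(m)], axis=0)
        # dual update: penalize disagreement with the consensus
        for i in range(m):
            us[i] += xs[i] - z
    return z
```

For quadratic losses the returned `z` matches the solution of the stacked least-squares problem; the adaptive per-agent tuning of `rho` described in [57] is omitted here for brevity.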
“…The above objective function can be minimized using first-order optimization methods or second-order methods [54,55].…”
Section: Low-dimensional Embeddings of Dataset
confidence: 99%
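To make the first-order vs. second-order distinction in the statement above concrete, a minimal sketch on an illustrative 1-D objective f(x) = eˣ − 2x (minimized at x = ln 2); the function names and step sizes are my own assumptions, not from the cited work:

```python
import math

def minimize_gd(grad, x0, lr=0.1, iters=200):
    """First-order: gradient descent, uses only f'(x)."""
    x = x0
    for _ in range(iters):
        x -= lr * grad(x)
    return x

def minimize_newton(grad, hess, x0, iters=20):
    """Second-order: Newton's method, rescales the step by 1/f''(x)."""
    x = x0
    for _ in range(iters):
        x -= grad(x) / hess(x)
    return x

grad = lambda x: math.exp(x) - 2.0   # f'(x) for f(x) = e^x - 2x
hess = lambda x: math.exp(x)         # f''(x)
```

Both reach the minimizer ln 2 ≈ 0.693; Newton needs far fewer iterations because the Hessian supplies curvature information, at the cost of a (here trivial) linear solve per step — the trade-off that second-order methods such as Newton-ADMM accept.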
“…In recent years, a number of papers have proposed parallelizable variants of numerical optimization methods such as the interior point method [37], parallel quadratic programming [38], ADMM [39–41] and other proximal algorithms [42,43]. In these approaches, GPUs are used to parallelize the involved algebraic operations and the solution of linear systems: the primal-dual optimality conditions in interior point algorithms and equality-constrained QPs in ADMM.…”
Section: Introduction
confidence: 99%
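The "equality-constrained QPs in ADMM" mentioned above reduce, per iteration, to one linear (KKT) system — exactly the kernel those papers offload to the GPU. A minimal CPU sketch with NumPy (the function name and example problem are illustrative assumptions):

```python
import numpy as np

def eq_qp_solve(P, q, A, b):
    """Solve min 0.5 x'Px + q'x  s.t.  Ax = b by forming and solving the
    KKT system  [[P, A'], [A, 0]] [x; nu] = [-q; b].  On a GPU this dense
    solve is the step that gets parallelized."""
    n, m = P.shape[0], A.shape[0]
    K = np.block([[P, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-q, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # primal solution (drop the multiplier nu)
```

For example, minimizing x₁² + x₂² subject to x₁ + x₂ = 1 (i.e. P = 2I, q = 0) yields x = (0.5, 0.5).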