2016
DOI: 10.1063/1.4952194
Solving global optimization problems on GPU cluster

Cited by 22 publications (16 citation statements)
References 20 publications
“…where ε_max is large enough (in our experiments ε_max was set equal to 20). The function Er(ε) can be multiextremal, non-differentiable, and hard to evaluate even at a single value of ε, since each evaluation requires reconstructing the interpolant (1). It is assumed that Er(ε) satisfies the Lipschitz condition over the interval [0, ε_max]:…”
Section: Statement Of The Optimization Problem (mentioning)
confidence: 99%
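The excerpt above assumes only that Er(ε) is Lipschitz on [0, ε_max]. A classical way to exploit such an assumption is the Piyavskii–Shubert saw-tooth method; the sketch below is illustrative, not the cited paper's algorithm, and the objective, Lipschitz constant, and iteration budget are assumptions chosen for the demo.

```python
import math

def piyavskii_shubert(f, a, b, L, n_iter=50):
    """Minimize a Lipschitz function f on [a, b] with constant L.

    Keeps all evaluated points; at each step it computes, for every
    adjacent pair, the minimum of the two-cone lower bound
        max over the pair of f(x_i) - L*|x - x_i|
    and evaluates f where that bound is smallest.
    """
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(n_iter):
        pts.sort()
        best = None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # Intersection of the cones from x1 and x2 gives the
            # minimizer of the lower bound on [x1, x2].
            x_new = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if best is None or lb < best[0]:
                best = (lb, x_new)
        pts.append((best[1], f(best[1])))
    return min(pts, key=lambda p: p[1])

# Illustrative multiextremal 1-D objective (|f'| <= 1.1, so L = 1.2 is valid)
obj = lambda x: math.sin(x) + 0.1 * x
x_star, f_star = piyavskii_shubert(obj, 0.0, 20.0, L=1.2, n_iter=60)
```

Because L genuinely bounds the slope of the objective, the saw-tooth lower bound is valid everywhere and the method provably converges to the global minimum, which is the same guarantee the quoted Lipschitz assumption buys for Er(ε).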
“…Additional information on such parallel computation schemes can be found in [22]. 4. Globalizer System Architecture.…”
Section: Parallel Computations For Systems With Distributed Memory (mentioning)
confidence: 99%
“…One of the first developments was the SYMOP multiextremal optimization system [17], which has been applied to solving many optimization problems. The ExaMin system [2], [3], [4], [16] was developed and used to investigate various parallel algorithms for solving global optimization problems on high-performance computational systems.…”
Section: Parallel Computations For Systems With Distributed Memory (mentioning)
confidence: 99%
“…-And finally, once again, nobody knows how to make efficient fully Bayesian use of a cluster of GPUs in the (n, p) = (Big, Big) setting, where such a cluster would be most needed if it could be effectively utilized. If it's sufficient in your problem to settle for MAP estimates, see Barkalov, Gergel and Lebedev (2016) for one approach to computing them on a GPU cluster.…”
Section: Many Of The Sharding Papers Concentrate On Examples In Which… (mentioning)
confidence: 99%