2017
DOI: 10.1007/s10898-017-0548-3

Implementation of Cartesian grids to accelerate Delaunay-based derivative-free optimization

Cited by 7 publications (15 citation statements) | References 13 publications
“…A comparison between the minima of these two search functions is made in order to decide between further sampling (and, therefore, refining) an existing measurement, or sampling at a new point in parameter space. The method developed builds closely on the Delaunay-based Derivative-free Optimization via Global Surrogates algorithm, dubbed ∆-DOGS, proposed in [12][13][14]. Convergence of the algorithm is established in problems for which a.…”
Section: Discussion
confidence: 99%
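The decision rule quoted above can be made concrete with a minimal sketch, under stated assumptions: refine_search and explore_search below are hypothetical stand-ins for the two search functions whose minima are compared, and the explicit point lists are illustrative rather than the authors' implementation.

def next_action(existing_pts, candidate_pts, refine_search, explore_search):
    """Compare the minima of two search functions: refine an existing
    measurement if its search value is lower, else sample a new point."""
    x_ref = min(existing_pts, key=refine_search)
    x_exp = min(candidate_pts, key=explore_search)
    if refine_search(x_ref) <= explore_search(x_exp):
        return "refine", x_ref   # further sample / refine an existing measurement
    return "explore", x_exp      # sample at a new point in parameter space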
“…Algorithm 1 presents a strawman form of the ∆-DOGS(Z) algorithm. A significant refinement of this algorithm is presented as Algorithm 2 of [12], together with its proof of convergence and its implementation on model problems.…”
Section: Delaunay-based Optimization Coordinated with a Grid: ∆-DOGS(Z)
confidence: 99%
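As a rough illustration of the grid coordination this excerpt refers to (a sketch of the idea, not the published Algorithm 1 or Algorithm 2): candidate points are quantized onto a Cartesian grid of spacing 1/2**k over the unit box, and the grid is refined whenever quantization lands on an already-sampled point. All names and the control flow below are assumptions.

import numpy as np

def quantize(x, k):
    # Snap x (assumed to lie in the unit box) to the nearest point of the
    # Cartesian grid with spacing 1/2**k.
    h = 1.0 / 2**k
    return np.round(np.asarray(x, dtype=float) / h) * h

def coordinate_with_grid(x_candidate, sampled, k):
    # If the quantized candidate is new, evaluate there; otherwise refine
    # the grid and signal the caller to redo the surrogate search.
    z = quantize(x_candidate, k)
    if any(np.allclose(z, s) for s in sampled):
        return "refine", None, k + 1
    return "evaluate", z, k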
“…One of the main challenges for the Hough transform is tuning hyperparameters (e.g., distance resolution, angle resolution, an accumulator threshold parameter, minimum line length, and maximum line gap) in an efficient and accurate way for each image. This fine-tuning can easily be automated using blackbox optimization schemes such as [23,24]. One of the most common variants of the HLT is the Probabilistic HLT [20], in which a random, smaller subset of the original significant points is used for computation instead.…”
Section: Hough Line Transform
confidence: 99%
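The automation suggested in this excerpt can be sketched with a simple random search standing in for the blackbox optimizers cited as [23,24]; this is a minimal illustration under assumptions, with score() a hypothetical user-supplied metric rating the detected lines and the parameter ranges chosen arbitrarily.

import cv2
import numpy as np

def tune_hough(edges, score, n_trials=50, seed=0):
    # edges: 8-bit single-channel edge map (e.g., output of cv2.Canny).
    # score: hypothetical callable rating the detected lines; it must accept
    # None, since cv2.HoughLinesP returns None when no lines are found.
    rng = np.random.default_rng(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = dict(
            rho=float(rng.uniform(0.5, 2.0)),          # distance resolution (px)
            theta=np.pi / int(rng.integers(90, 361)),  # angle resolution (rad)
            threshold=int(rng.integers(20, 200)),      # accumulator votes
            minLineLength=int(rng.integers(10, 200)),
            maxLineGap=int(rng.integers(1, 50)),
        )
        lines = cv2.HoughLinesP(edges, **params)
        s = score(lines)
        if s > best_score:
            best_score, best_params = s, params
    return best_params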