Practical Methods of Optimization (2000)
DOI: 10.1002/9781118723203

Cited by 1,935 publications (1,931 citation statements)
References: 0 publications
“…then have a restricted step-size gradient descent problem (Fletcher & Fletcher, 2000). Thus, we have an optimization problem…”
Section: Motivation (mentioning)
confidence: 99%
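The statement above refers to gradient descent with a restricted step size. As a rough illustration only (the objective, step bound, and learning rate below are assumptions, not taken from the cited work), a gradient step whose norm is capped at a fixed radius can be sketched as:

import numpy as np

def restricted_step_gradient_descent(grad, x0, max_step=0.1, lr=0.5, iters=100):
    # Take gradient steps whose Euclidean norm is capped at max_step
    # (a simple stand-in for the restricted step-size idea in the quote).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        step = -lr * grad(x)
        norm = np.linalg.norm(step)
        if norm > max_step:
            step *= max_step / norm   # shrink the step onto the trust radius
        x = x + step
    return x

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x_min = restricted_step_gradient_descent(lambda x: 2 * x, x0=[3.0, -4.0])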
“…This iterative procedure may take a relatively small computational time since the summation term in (7) is conducted only over the set of support vectors Ω, due to the Karush-Kuhn-Tucker dual complementarity condition [19]. Therefore, with a low cost, ATLAS adaptively determines time-step sizes and methods in accordance with the specified level of required local convergence.…”
Section: Adaptive Time-Stepping With Support Vector Machines (mentioning)
confidence: 99%
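The ATLAS excerpt relies on the fact that, by Karush-Kuhn-Tucker complementarity, only the support vectors carry nonzero dual coefficients, so the kernel summation can be restricted to that set. A minimal sketch of this point (the data, kernel, and gamma value are illustrative assumptions; the decision value is recomputed from scikit-learn's fitted attributes, not from the cited paper's code):

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0.0, 1, -1)   # toy labels for illustration

gamma = 1.0
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)

def decision(x, clf, gamma):
    # Sum only over the support set: dual_coef_ holds alpha_i * y_i for the
    # support vectors; every other training point has alpha_i = 0 by KKT.
    k = np.exp(-gamma * np.sum((clf.support_vectors_ - x) ** 2, axis=1))
    return (clf.dual_coef_ @ k)[0] + clf.intercept_[0]

# The restricted sum matches scikit-learn's full decision function.
x_test = np.array([0.5, 0.5])
print(decision(x_test, clf, gamma), clf.decision_function([x_test])[0])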
“…Similar computational needs and problems with large numbers of parameters apply for the maximum likelihood method, which is used for single-channel measurements as in the software package QuB (Milescu et al 2005). Sequential quadratic programming methods can also be used to find a constrained minimum with superlinear convergence (Gill 1981, Fletcher 1987) for model parameters using Matlab™ (Bueno-Orovio et al 2008). Global methods like genetic algorithms, which have been used to fit conductance values (Syed et al 2005), simulated annealing (e.g.…”
Section: Parameterisation, Data Availability and Local Minima (mentioning)
confidence: 99%
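The passage above mentions sequential quadratic programming for finding a constrained minimum. As a hedged sketch only (the objective, constraint, and starting point are made-up examples, and SciPy's SLSQP routine is used here as a readily available SQP-style solver rather than the software named in the quote):

import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize (x0 - 1)^2 + (x1 - 2.5)^2 subject to x0 + x1 <= 2.
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]  # fun(x) >= 0

result = minimize(objective, x0=np.array([0.0, 0.0]),
                  method="SLSQP", constraints=constraints)
print(result.x, result.fun)   # constrained minimizer and objective value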