2012
DOI: 10.1016/j.compchemeng.2012.02.011
Leapfrogging and synoptic Leapfrogging: A new optimization approach

Cited by 22 publications (6 citation statements) · References 30 publications
“…Due to the increased accuracy and complexity (i.e., number of parameters) of modern force fields, sophisticated high-dimensional optimization, multiobjective Pareto front, and uncertainty quantification (UQ) methods should play a key role in force field development. However, these methods are not tractable when molecular simulation is performed at each step of the algorithm, as this may necessitate O(10² to 10⁶) simulations.…”

Section: Introduction

confidence: 99%
“…For our purpose, we have employed the Bayesian regularization minimization algorithm, a popular approach to ANN training within the framework of the MATLAB nnstart toolbox that is particularly useful for noisy data (Gençay and Qi, 2001). We remark that other training methods may also be used, such as Levenberg-Marquardt (Levenberg, 1944) and nature-inspired heuristic algorithms such as particle swarm (Eberhart and Kennedy, 1995), differential evolution (Storn and Price, 1997), or leapfrogging (Rhinehart et al., 2012). However, access to gradient information (through the use of a differentiable cost function) heavily favors gradient-based optimizers.…”

Section: ANN Training

confidence: 99%
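The leapfrogging heuristic cited above can be illustrated with a minimal sketch: a population of "players" samples the search space, and at each step the worst player leaps over the best into the reflected window on the far side. This is a simplified reading of the idea in Rhinehart et al. (2012), not the authors' reference implementation; the function name `leapfrog` and all parameter defaults here are illustrative choices.

```python
import numpy as np

def leapfrog(f, bounds, n_players=20, iters=2000, seed=0):
    """Minimal sketch of leapfrogging optimization (minimization).

    At each iteration the worst player leaps to a uniformly random point
    in the window between the best player and the mirror image of the
    worst player reflected across the best. Parameter choices are
    illustrative, not taken from the original paper.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T  # bounds: list of (lo, hi) per dimension
    dim = lo.size
    players = lo + rng.random((n_players, dim)) * (hi - lo)
    scores = np.array([f(p) for p in players])
    for _ in range(iters):
        worst, best = np.argmax(scores), np.argmin(scores)
        # Leap-over: new position lies between the best player and the
        # reflection of the worst player through the best player.
        leap_to = players[best] + rng.random(dim) * (players[best] - players[worst])
        players[worst] = np.clip(leap_to, lo, hi)
        scores[worst] = f(players[worst])
    best = np.argmin(scores)
    return players[best], scores[best]
```

On a smooth test function such as the sphere, the leap windows shrink as the players cluster, so the population contracts toward a good point; unlike gradient-based training, only function values are needed, which is why such heuristics appear alongside particle swarm and differential evolution above.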
“…For example, the authors in [133] design a memetic algorithm for the maximum clique problem, i.e., an EA augmented with a local search. For illustration purposes, the rest of this section outlines binary particle swarm optimization, an important instance of an EA with robust implementations such as Leapfrogging [134]–[136]. A comparison of the performance of the different EAs can be found in [137].…”

Section: Randomized Solvers for Integer Problems

confidence: 99%
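The binary particle swarm optimization mentioned in this citation statement can be sketched as follows. This follows the common sigmoid-based binary PSO variant (real-valued velocities, each bit resampled with probability given by a sigmoid of its velocity); the function name `binary_pso`, the parameter defaults, and the velocity clamp are illustrative assumptions, not details from the cited survey.

```python
import numpy as np

def binary_pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Sketch of sigmoid-based binary PSO (minimization over bit vectors).

    Velocities are real-valued; the sigmoid of each velocity component
    gives the probability that the corresponding bit is set to 1.
    """
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (n_particles, dim))       # particle bit vectors
    v = np.zeros((n_particles, dim))                 # real-valued velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    g_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -6.0, 6.0)                    # velocity clamp keeps sigmoid away from 0/1
        prob_one = 1.0 / (1.0 + np.exp(-v))
        x = (rng.random((n_particles, dim)) < prob_one).astype(int)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < g_val:
            g_val = vals.min()
            g = x[np.argmin(vals)].copy()
    return g, g_val
```

On an easy benchmark such as OneMax (maximize the number of set bits, i.e., minimize the number of zeros), the swarm typically converges to the all-ones vector, which is why binary PSO serves as a convenient illustrative instance of a randomized solver for integer problems.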