2016
DOI: 10.1109/tpds.2016.2575822
A multi-core CPU and many-core GPU based fast parallel shuffled complex evolution global optimization approach

Abstract: In the field of hydrological modelling, global and automatic parameter calibration has been a hot issue for many years. Among automatic parameter optimization algorithms, the shuffled complex evolution method developed at the University of Arizona (SCE-UA) is the most successful method for stably and robustly locating the global "best" parameter values. Ever since the invention of the SCE-UA, the profession has had a consistent way to calibrate watershed models. However, the computational efficie…
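To make the abstract concrete, the core SCE-UA idea can be illustrated with a minimal sketch: sample a population, partition it into complexes, evolve each complex with simplex-style reflection/contraction steps, then shuffle and repeat. This is a hypothetical simplification for intuition only, not the paper's parallel CPU/GPU implementation; all function and parameter names here are illustrative assumptions.

```python
import numpy as np

def sce_ua(f, bounds, n_complexes=2, m=5, n_evolutions=5, n_shuffles=30, seed=0):
    """Simplified shuffled-complex-evolution sketch (minimization).

    f      : objective function mapping a parameter vector to a scalar
    bounds : sequence of (low, high) pairs, one per parameter
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop_size = n_complexes * m
    # Initial population sampled uniformly within the bounds.
    pop = low + rng.random((pop_size, dim)) * (high - low)
    cost = np.array([f(x) for x in pop])

    for _ in range(n_shuffles):
        order = np.argsort(cost)                  # sort: best point first
        pop, cost = pop[order], cost[order]
        for c in range(n_complexes):
            # Deal points into complexes like cards, as SCE-UA does.
            idx = np.arange(c, pop_size, n_complexes)
            for _ in range(n_evolutions):
                # Random sub-complex; replace its worst member.
                sub = rng.choice(idx, size=min(dim + 1, m), replace=False)
                worst = sub[np.argmax(cost[sub])]
                centroid = pop[sub[sub != worst]].mean(axis=0)
                trial = np.clip(2.0 * centroid - pop[worst], low, high)  # reflect
                f_trial = f(trial)
                if f_trial >= cost[worst]:        # reflection failed: contract
                    trial = 0.5 * (centroid + pop[worst])
                    f_trial = f(trial)
                if f_trial < cost[worst]:
                    pop[worst], cost[worst] = trial, f_trial
    best = np.argmin(cost)
    return pop[best], cost[best]
```

In the full algorithm the complex-evolution loop is the natural unit to parallelize, which is why the paper targets multi-core CPUs and many-core GPUs.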

Cited by 29 publications (16 citation statements)
References 27 publications
“…The successful application of OpenACC (Kan, Zhang, et al 2016; Kan, Lei, et al 2017; Kan et al 2018; Kan, He, Ding, et al 2017) in CLTx leads to great improvement in computational efficiency for studying the MHD instability in Tokamak. The migration of CLTx from the CPU-MPI platform to the GPU-OpenACC platform is relatively easy compared with completely rewriting the code in CUDA or OpenCL.…”
Section: Discussion
confidence: 99%
“…The final simulated output is the sum of the estimated output and the estimated output error. The structure of the PEK approximator [31–40] is shown in Fig. 2, where n1, n2, …, nc denote the number of candidate input variables for Class 1, 2, …, nc, respectively; weight 1, weight 2, …, weight n denote combination weights for component networks; O(s), O(e), and E(e) denote simulated output, estimated output, and estimated output error, respectively.…”
Section: PEK-based Machine Learning Methods
confidence: 99%
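The combination step that the quote describes — weight the component-network outputs to get the estimated output O(e), then add the estimated output error E(e) to form the simulated output O(s) — can be sketched in a few lines. The function name and argument layout are assumptions for illustration; the real PEK approximator's networks and error estimator are not reproduced here.

```python
import numpy as np

def pek_combine(component_outputs, weights, estimated_error):
    """Combine component-network outputs O_i with weights w_i,
    then add the estimated output error: O(s) = sum_i w_i * O_i + E(e)."""
    component_outputs = np.asarray(component_outputs, float)
    weights = np.asarray(weights, float)
    estimated = weights @ component_outputs   # O(e): weighted combination
    return estimated + estimated_error        # O(s): add error estimate
```

For example, two component outputs 1.0 and 3.0 with equal weights 0.5 and an estimated error of 0.2 yield a simulated output of 2.2.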
“…The model parameters are optimized using the SCE-UA optimization method developed by Duan [35–44]. The objective function of the parameter optimization is the Nash-Sutcliffe coefficient of efficiency (NSCE).…”
Section: Model Calibration
confidence: 99%
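The NSCE objective mentioned in the quote is a standard goodness-of-fit score: one minus the ratio of the residual sum of squares to the variance of the observations about their mean. A value of 1 means a perfect fit; values near or below 0 mean the model does no better than the observed mean. A minimal implementation, written here as an illustration rather than the cited paper's code:

```python
import numpy as np

def nsce(observed, simulated):
    """Nash-Sutcliffe coefficient of efficiency.

    NSCE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    """
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    resid = np.sum((observed - simulated) ** 2)
    spread = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - resid / spread
```

Calibration then maximizes this score over the model-parameter space, e.g. with SCE-UA.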