2012 IEEE Congress on Evolutionary Computation
DOI: 10.1109/cec.2012.6256117
Dependency Identification technique for large scale optimization problems

Cited by 33 publications (20 citation statements)
References 11 publications
“…The results have shown that the performance of CCGA-1 on separable problems is significantly better than that of a standard GA, while the performance of CCGA-1 degrades on non-separable problems. Separable problems are those with no interacting variables, i.e., the influence of each variable on the fitness value is independent of any other variable, whereas problems with strong interacting variables are non-separable. [Table excerpt: the random grouping and adaptive weighting scheme embedded in a cooperatively coevolving PSO; Sayed et al. [155], HDIMA, the hybrid dependency identification with the memetic algorithm; Sayed et al. [154], DIMA, the dependency identification with the memetic algorithm; Sun et al. [197], CPSO-SL.] … the n-dimensional problem is decomposed into K s-dimensional sub-problems (where n = K × s). A concatenation of all global best particles from all K swarms is called the context vector ŷ, which is used to compute the fitness of a particle in a swarm.…”
Section: Q2
mentioning
confidence: 99%
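
The context-vector evaluation described in this excerpt can be sketched briefly. The snippet below is a minimal Python illustration, not code from the cited papers: an s-dimensional particle from one swarm is spliced into the concatenation of the other swarms' global bests before the full n-dimensional fitness function is called. All names (sphere, build_context_vector, evaluate_in_context) are illustrative assumptions.

    import numpy as np

    def sphere(x):
        """Illustrative separable fitness function on the full n-dimensional vector."""
        return float(np.sum(x ** 2))

    def build_context_vector(global_bests):
        """Concatenate the global-best particles of all K swarms into the context vector."""
        return np.concatenate(global_bests)            # length K * s == n

    def evaluate_in_context(particle, swarm_index, global_bests, s, fitness=sphere):
        """Fitness of an s-dimensional particle belonging to swarm `swarm_index`.

        The particle overwrites its own s-dimensional slice of the context
        vector; every other position keeps the global best of its swarm.
        """
        y = build_context_vector(global_bests).copy()
        y[swarm_index * s:(swarm_index + 1) * s] = particle
        return fitness(y)

    # Toy usage: n = K * s = 3 * 2 = 6 decision variables.
    K, s = 3, 2
    rng = np.random.default_rng(0)
    global_bests = [rng.normal(size=s) for _ in range(K)]
    candidate = np.zeros(s)                            # a particle from swarm 1
    print(evaluate_in_context(candidate, 1, global_bests, s))
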
“…Furthermore, they showed how the linkage identification method (LINC-R) [169] can be derived from Theorem 1. In [155,154], the dependency identification (DI) technique for decomposing an LSGO problem was proposed; it is derived from the definition of problem separability provided in [121,162]. The problem separability is…”
mentioning
confidence: 99%
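
The DI technique builds on this notion of separability, which also underlies the LINC-R check. As a rough illustration written from the separability definition rather than from [154,155] or [169], the following perturbation test flags two variables as interacting when the fitness change caused by perturbing one of them depends on the value of the other; the step size delta and tolerance eps are assumptions.

    import numpy as np

    def interacts(f, x, i, j, delta=1.0, eps=1e-9):
        """Perturbation-style nonlinearity check between variables i and j.

        For an additively separable pair, the fitness change caused by
        perturbing x_i is the same whether or not x_j was also perturbed:
            f(x + d_i) - f(x) == f(x + d_i + d_j) - f(x + d_j)
        A nonzero difference signals an interaction (non-separability).
        """
        x = np.asarray(x, dtype=float)
        xi = x.copy();  xi[i] += delta             # perturb i only
        xj = x.copy();  xj[j] += delta             # perturb j only
        xij = xi.copy(); xij[j] += delta           # perturb both i and j
        d1 = f(xi) - f(x)
        d2 = f(xij) - f(xj)
        return abs(d1 - d2) > eps

    # Toy usage: x0*x1 couples variables 0 and 1; variable 2 is independent.
    f = lambda x: x[0] * x[1] + x[2] ** 2
    x0 = np.zeros(3)
    print(interacts(f, x0, 0, 1))   # True  -> interacting pair
    print(interacts(f, x0, 0, 2))   # False -> separable pair
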
“…BBOP [2], LSGO [4] or the suite (LSGO-extended) in [3] are optimization benchmarks built from synthetic fitness functions that provide opportunities to empirically study and compare the performance of optimizers [1]. The underlying assumption is that algorithms that perform statistically significantly better on some benchmark functions will also exhibit this behavior on real-world problems featuring some kind of similarity with the corresponding functions.…”
Section: Introduction
mentioning
confidence: 99%
“…but there has not been much work proposed that finds dependencies among decision variables using memetic algorithms. Sayed et al. tried to fill this gap by proposing DIMA [20]. It consists of the following two stages:…”
Section: Memetic Algorithms
mentioning
confidence: 99%