Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600)
DOI: 10.1109/cec.2002.1007024

Threshold selection, hypothesis tests, and DOE methods

Abstract: Threshold selection, a selection mechanism for noisy evolutionary algorithms, is put into the broader context of hypothesis testing. Theoretical results are presented and applied to a simple model of stochastic search and to a simplified elevator simulator. Design of experiments methods are used to validate the significance of the results.
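To make the abstract's central idea concrete, the following is a minimal sketch of threshold selection in a noisy (1+1) evolution strategy: an offspring replaces its parent only when its estimated fitness beats the parent's estimate by more than a threshold tau. The sphere test function, the noise level, the sample count, and the value of tau are illustrative assumptions, not the settings analyzed in the paper.

# Minimal sketch of threshold selection in a noisy (1+1) evolution strategy.
# All numeric settings and the test function are illustrative assumptions.
import random


def noisy_fitness(x, noise_sd=1.0):
    """Sphere function (to be minimized) corrupted by Gaussian noise."""
    true_value = sum(xi * xi for xi in x)
    return true_value + random.gauss(0.0, noise_sd)


def threshold_select(parent, offspring, tau, samples=5):
    """Accept the offspring only if its mean noisy fitness beats the
    parent's estimate by more than the threshold tau (minimization)."""
    f_parent = sum(noisy_fitness(parent) for _ in range(samples)) / samples
    f_offspring = sum(noisy_fitness(offspring) for _ in range(samples)) / samples
    return offspring if f_offspring < f_parent - tau else parent


def one_plus_one_es(dim=5, sigma=0.3, tau=0.5, generations=200):
    x = [random.uniform(-3.0, 3.0) for _ in range(dim)]
    for _ in range(generations):
        child = [xi + random.gauss(0.0, sigma) for xi in x]
        x = threshold_select(x, child, tau)
    return x


if __name__ == "__main__":
    print(one_plus_one_es())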

Cited by 32 publications (26 citation statements). References 8 publications.
“…The experiments in this paper use NeuroEvolution of Augmenting Topologies (NEAT) as a representative evolutionary method for RL. NEAT is an appropriate choice because of its empirical successes on difficult RL tasks like pole balancing [19], game playing [21], and robot control [20].…”
Section: Neuroevolution of Augmenting Topologies
confidence: 99%
“…While previous researchers have developed statistical schemes for performing such allocations [1,18], in this paper we adopt a simple heuristic strategy to increase the performance of NEAT: we concentrate evaluations on the more promising organisms in the population because their offspring will populate the majority of the next generation. In each generation, every organism is initially evaluated for ten episodes.…”
Section: NEAT
confidence: 99%
“…Furthermore, effector noise could actually speed up learning by providing a nat[ural]… Increasing the EPE is an effective but not necessarily efficient way of increasing the accuracy of fitness estimates [11]. More sophisticated strategies that measure uncertainty when deciding which individuals to resample (e.g., [8,59]) may perform better. Studying such methods empirically is beyond the scope of this paper.…”
Section: Testing the Effect of Stochasticity
confidence: 99%
“…One difficult question is how to distribute evaluation episodes among the organisms in a particular generation, given a noisy fitness function. While previous researchers have developed statistical schemes for performing such allocations [8,59], in this paper we adopt a simple heuristic strategy to increase the performance of NEAT: we concentrate evaluations on the more promising organisms in the population because their offspring will populate the majority of the next generation. In each generation, we conduct 6,000 evaluations.…”
Section: Applying NEAT to Keepaway
confidence: 99%
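The allocation heuristic described in the two excerpts above — a fixed per-generation evaluation budget, a small number of initial episodes for every organism, and the remaining episodes concentrated on the current leaders — can be sketched as follows. The evaluate() stub, the top_fraction parameter, and all numeric settings are hypothetical placeholders, not the procedure from the citing paper.

# Hedged sketch of an evaluation-allocation heuristic for a noisy fitness
# function: give every organism a few initial episodes, then spend the rest
# of the budget re-evaluating the organisms that currently look best.
import random


def evaluate(organism):
    # Stand-in for one noisy fitness episode; replace with the real simulator.
    return organism["true_fitness"] + random.gauss(0.0, 1.0)


def allocate_episodes(population, initial_episodes=10, budget=6000, top_fraction=0.25):
    # Phase 1: every organism gets the same small number of episodes.
    scores = {i: [evaluate(org) for _ in range(initial_episodes)]
              for i, org in enumerate(population)}
    remaining = budget - initial_episodes * len(population)
    # Phase 2: spend the rest of the budget on the current leaders,
    # re-ranking after each sweep so the estimates stay up to date.
    while remaining > 0:
        ranked = sorted(scores, key=lambda i: sum(scores[i]) / len(scores[i]), reverse=True)
        leaders = ranked[:max(1, int(top_fraction * len(population)))]
        for i in leaders:
            if remaining == 0:
                break
            scores[i].append(evaluate(population[i]))
            remaining -= 1
    # Return each organism's mean observed fitness.
    return [sum(scores[i]) / len(scores[i]) for i in range(len(population))]


if __name__ == "__main__":
    pop = [{"true_fitness": random.uniform(0.0, 10.0)} for _ in range(100)]
    print(max(allocate_episodes(pop)))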
“…For example, Stagge [18] introduces mechanisms for deciding which individuals need more evaluations, assuming the noise is Gaussian. Beielstein and Markon [2] use a similar approach to develop tests for determining which individuals should survive. However, this area of research has a significantly different focus, since the goal is to find the best individuals using the fewest evaluations, not to maximize the reward accrued during those evaluations.…”
Section: Related and Future Work
confidence: 99%
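The survival tests attributed to Beielstein and Markon in the excerpt above amount to a hypothesis test on noisy fitness samples. A minimal sketch, assuming a Welch-style two-sample test with a normal approximation to the reference distribution (not the exact procedure from the cited papers), might look like this:

# Keep the challenger only if a one-sided test rejects "no improvement over
# the incumbent" at level alpha. Sample sizes, alpha, and the normal
# approximation are illustrative assumptions.
from statistics import NormalDist, mean, variance


def welch_t_statistic(a, b):
    """Welch's t statistic for mean(a) - mean(b) with unequal variances."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se


def challenger_survives(challenger_samples, incumbent_samples, alpha=0.05):
    """One-sided test: does the challenger's mean fitness significantly
    exceed the incumbent's (maximization)?"""
    t = welch_t_statistic(challenger_samples, incumbent_samples)
    p_value = 1.0 - NormalDist().cdf(t)  # normal approximation to the t-distribution
    return p_value < alpha


if __name__ == "__main__":
    import random
    incumbent = [10.0 + random.gauss(0.0, 2.0) for _ in range(30)]
    challenger = [11.0 + random.gauss(0.0, 2.0) for _ in range(30)]
    print(challenger_survives(challenger, incumbent))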