2005
DOI: 10.1109/tevc.2005.856212
Coevolution Versus Self-Play Temporal Difference Learning for Acquiring Position Evaluation in Small-Board Go

Cited by 59 publications (67 citation statements)
References 28 publications
“…board size, i.e. networks trained successfully on small boards (where training is efficient) do not play well when the board is enlarged [5,3]. The present paper builds on the promising preliminary results [6,7] of a scalable approach based on Multi-dimensional Recurrent Neural Networks (MDRNNs; [8,9]) and enhances the ability of that architecture to capture long-distance dependencies.…”
Section: Introduction
confidence: 94%
“…In addition, despite being described by a small set of formal rules, they often involve highly complex strategies. One of the most interesting board games is the ancient game of Go (among other reasons, because computer programs are still much weaker than human players), which can be solved for small boards [1] but is very challenging for larger ones [2,3]. Its extremely large search space defies traditional search-based methods.…”
Section: Introduction
confidence: 99%
“…Similarly, Runarsson and Lucas [56] compare evolution and TD in small-board Go and find that TD learns much faster and in most cases achieves higher performance also. However, they find at least one setup, using coevolution, wherein evolution outperforms TD.…”
Section: Related Work
confidence: 92%
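The comparison above concerns temporal difference learning of position values through self-play. As a rough illustration only (not the setup of Runarsson and Lucas, who used a much richer state representation), the core TD(0) value update that such methods apply after each move can be sketched as follows; the state labels and toy episode are invented placeholders:

```python
# Minimal sketch of a tabular TD(0) update for position evaluation,
# the kind of bootstrapped update used in self-play training of game
# evaluation functions. States and rewards below are illustrative.

def td0_update(values, episode, alpha=0.1, gamma=1.0):
    """Apply one TD(0) pass over an episode of (state, reward) pairs.

    values  : dict mapping state -> estimated value
    episode : list of (state, reward) pairs; the reward is received on
              leaving the state (terminal reward at the end)
    """
    # bootstrap each state's value toward reward + discounted successor value
    for (s, r), (s_next, _) in zip(episode, episode[1:]):
        target = r + gamma * values.get(s_next, 0.0)
        values[s] = values.get(s, 0.0) + alpha * (target - values.get(s, 0.0))
    # the terminal state is moved toward its terminal reward alone
    s_last, r_last = episode[-1]
    values[s_last] = values.get(s_last, 0.0) + alpha * (r_last - values.get(s_last, 0.0))
    return values

# toy self-play episode: three positions, a win (+1) at the end
episode = [("p1", 0.0), ("p2", 0.0), ("p3", 1.0)]
values = td0_update({}, episode)
```

The learning-speed advantage reported for TD comes from exactly this bootstrapping: every position in every game yields an update, whereas evolutionary methods only receive a fitness signal per whole game or tournament.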
“…Those that do (e.g., [21,45,49,56,80]) rarely isolate the factors critical to the performance of each method. As a result, there are currently few general guidelines describing the methods' relative strengths and weaknesses.…”
Section: Introduction
confidence: 99%
“…Fogel [1] evolved neural networks for board evaluation in chess, and Schraudolph [2] similarly optimised board evaluation functions, but for the game Go and using TD-learning; Lucas and Runarsson [3] compared both methods. Moving on to games that actually require a computer to play (computer games proper, rather than just computerised games) optimisation algorithms have been applied to many simple arcade-style games such as Pacman [4], X-pilot [5] and Cellz [6].…”
Section: Optimisation
confidence: 99%