2005
DOI: 10.1016/j.tcs.2005.09.047
Tuning evaluation functions by maximizing concordance

Abstract: Heuristic search effectiveness depends directly upon the quality of heuristic evaluations of states in a search space. Given the large amount of research effort devoted to computer chess throughout the past half-century, insufficient attention has been paid to the issue of determining if a proposed change to an evaluation function is beneficial. We argue that the mapping of an evaluation function from chess positions to heuristic values is of ordinal, but not interval, scale. We identify a robust metric suitab…
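The abstract's central claim — that evaluation scores are only ordinal, so agreement with expert judgment should be measured by rank concordance rather than by numeric error — can be sketched with Kendall's tau, a standard concordance statistic. The metric choice and all data below are illustrative assumptions, not the paper's actual implementation:

```python
# Sketch: rank concordance (Kendall's tau-a) between an engine's
# evaluation scores and expert ordinal assessments of the same positions.
# Only the *ordering* of the scores matters, matching the ordinal-scale view.

def kendall_tau(xs, ys):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1   # pair ranked the same way by both
            elif s < 0:
                discordant += 1   # pair ranked oppositely

    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical engine scores (centipawns) vs. expert labels (-2 .. +2).
engine_scores = [35, -120, 250, 10, -40]
expert_labels = [1, -2, 2, 0, -1]
print(kendall_tau(engine_scores, expert_labels))  # 1.0: perfectly concordant
```

A tuning procedure in this spirit would adjust feature weights to push this statistic toward 1, without ever assuming the score differences themselves are meaningful.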

Cited by 16 publications (11 citation statements) · References 48 publications
“…Except for not using the desired moves, Buro's method has properties that are similar to those listed in Table 1; his objective function has continuity as well as an assured local minimum, and his method is scalable. Gomboc, Buro, and Marsland (2005) proposed to learn from game records annotated by human experts; however, the feature weights that were adjusted in their experiments were only a small part of the full evaluation functions. Reinforcement learning (Sutton & Barto, 1998), especially temporal difference learning, of which a famous success is Backgammon (Tesauro, 2002), is considered to be a promising way to avoid the difficulty in finding the desired values for regression.…”
Section: Other Methods Of Learning Evaluation Functions
confidence: 99%
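The temporal-difference learning mentioned in the statement above adjusts evaluation weights toward later, better-informed value estimates instead of regression targets. A minimal TD(0)-style update for a linear evaluation function might look like this (feature vectors, learning rate, and weight names are illustrative assumptions):

```python
# Minimal TD(0)-style weight update for a linear evaluation function
# eval(s) = w . features(s): weights are nudged so the current state's
# value moves toward reward + gamma * value(next state).

def td0_update(w, features_t, features_t1, reward, alpha=0.01, gamma=1.0):
    v_t = sum(wi * fi for wi, fi in zip(w, features_t))    # current estimate
    v_t1 = sum(wi * fi for wi, fi in zip(w, features_t1))  # successor estimate
    delta = reward + gamma * v_t1 - v_t                    # TD error
    return [wi + alpha * delta * fi for wi, fi in zip(w, features_t)]

w = [0.5, -0.2]  # e.g. material and mobility weights (made-up values)
w = td0_update(w, [1.0, 2.0], [1.0, 1.5], reward=0.0)
print(w)  # weights nudged toward the later estimate
```

This sidesteps the need for hand-specified target values: the successor position's own evaluation supplies the training signal.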
“…Man vs Machine games have become scarcer. There was an annual event in Bilbao called "People vs Computers", but the results in 2005 were extremely favorable to computer programs (Levy, 2005). David Levy, who was the referee of the match, even suggested that games should be played with odds and the event was apparently canceled the next year.…”
Section: Evaluation Of The Elo Strength Of The Program Used
confidence: 99%
“…Finding such a mapping is, however, not a real problem for chess programmers, because their problem is more to find a good ranking of the moves in a given position than an evaluation of the probability of winning the game, which has no direct practical interest. See for example (Gomboc, Buro, and Marsland, 2005) for the problem of tuning evaluation functions.…”
Section: The Experimental Settings
confidence: 99%
“…However, the domains where such analyses can be applied are limited. Similarly, we can see how the evaluation values for each position produced by an evaluation function agree on the preferences of human players, if positions with the assessments made by human players are available [21]. The applicability of this method is limited to domains in which such assessments can be carried out.…”
Section: B. Accuracy Of Game-Tree Search
confidence: 99%