Who is the best chess player of all time? Chess players have long been interested in this question, which has never been answered authoritatively because it requires comparing players of different eras who never met across the board. In this paper, we attempt such a comparison by using a chess-playing program to evaluate the games played by the World Chess Champions in their championship matches. We slightly adapted the program CRAFTY for this purpose. Our analysis also takes differences in playing style into account, to allow for the fact that calm positional players are, in their typical games, less likely to commit gross tactical errors than aggressive tactical players. To this end, we designed a method for assessing the difficulty of positions. Some results of this computer analysis may appear quite surprising; overall, however, they can be interpreted sensibly by a chess expert.
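To make the evaluation criterion concrete, here is a minimal sketch of how one might score a player's moves with an engine, as the mean difference between the engine's evaluation of its preferred continuation and its evaluation after the move actually played. It assumes the python-chess library and any UCI engine binary on the PATH; ENGINE_PATH, DEPTH, and the simple loss metric are illustrative assumptions, not the paper's setup, which used an adapted CRAFTY and a more elaborate criterion.

```python
# A minimal sketch, assuming python-chess and a UCI engine binary
# (ENGINE_PATH and DEPTH are assumptions, not the paper's configuration).
import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "stockfish"  # hypothetical: any UCI engine on the PATH
DEPTH = 12                 # fixed, depth-limited search

def mean_centipawn_loss(pgn_path: str, player: str) -> float:
    """Average loss, in centipawns, of `player`'s moves relative to the
    engine's evaluation of each position before the move."""
    losses = []
    with open(pgn_path) as f, \
         chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as eng:
        while (game := chess.pgn.read_game(f)) is not None:
            board = game.board()
            for move in game.mainline_moves():
                mover = (game.headers.get("White", "")
                         if board.turn == chess.WHITE
                         else game.headers.get("Black", ""))
                if player in mover:
                    # Engine's view of the position before the move,
                    # from the mover's perspective.
                    before = eng.analyse(board, chess.engine.Limit(depth=DEPTH))
                    best_cp = before["score"].pov(board.turn).score(mate_score=100000)
                    board.push(move)
                    # Engine's view after the actual move, same perspective
                    # (board.turn has flipped, so negate it back to the mover).
                    after = eng.analyse(board, chess.engine.Limit(depth=DEPTH))
                    played_cp = after["score"].pov(not board.turn).score(mate_score=100000)
                    losses.append(max(0, best_cp - played_cp))
                else:
                    board.push(move)
    return sum(losses) / len(losses) if losses else 0.0
```

A lower mean loss indicates play closer to the engine's preference; the paper's style-aware correction for position difficulty is not modelled in this sketch.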
In 2006, Guid and Bratko carried out a computer analysis of games played by World Chess Champions in an attempt to assess, as objectively as possible, one aspect of the playing strength of chess players of different eras. The chess program CRAFTY was used in the analysis. Given that CRAFTY's official chess rating is lower than the ratings of many of the players analysed, the question arises to what degree that analysis can be trusted. In this paper, we investigate this question and other aspects of the trustworthiness of those results. Our study shows that, at least for pairs of players whose scores differ significantly, their relative rankings would be unlikely to change if (1) a stronger chess program were used, (2) the program searched more deeply, or (3) larger sets of positions were available for the analysis. Experimental results and theoretical explanations show that, to obtain a sensible ranking of the players according to the criterion considered, it is not necessary to use a computer that is stronger than the players themselves.
There is a growing literature examining how men and women respond differently to competition. We contribute to this literature by studying gender differences in performance in a high-stakes, male-dominated competitive environment: expert chess tournaments. Our findings show that women underperform relative to men of the same ability, and that the gender composition of games drives this effect. Using within-player variation in the conditionally random gender of the opponent, we find that women earn significantly worse outcomes against male opponents. We examine the mechanisms through which this effect operates using a unique measure of within-game quality of play. We find that the gender-composition effect is driven by women playing worse against men, rather than by men playing better against women; the gender of the opponent does not affect a male player's quality of play. We also find that men persist longer against women before resigning. These results suggest that the gender composition of competitions affects the behavior of both men and women in ways that are detrimental to women's performance. Lastly, we study the effect of competitive pressure and find that players' quality of play deteriorates when stakes increase, though we find no differential effect by the gender composition of games.
JEL Codes: D03, J16, J24, J70, L83, M50.
Establishing heuristic-search-based chess programs as appropriate tools for estimating human skill levels at chess may seem impossible, since the programs' evaluations and decisions tend to change with the depth of search and with the program used. In this research, we analyse the differences between heuristic-search-based programs in estimating chess skill. We used four different chess programs to analyse large data sets of recorded human decisions, and obtained very similar rankings of the skill-based performances of the selected chess players with any of these programs at various levels of search. We conclude that, given two chess players, either all the programs unanimously rank one player as clearly stronger than the other, or all the programs assess their strengths as similar. We also repeated our earlier CRAFTY analysis of the World Chess Champions with RYBKA 3, currently one of the strongest chess programs, and obtained qualitatively very similar results. This speaks in favour of computer heuristic search being adequate for estimating the skill levels of chess players, despite the issues stated above.
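The claim that different programs and search depths produce very similar rankings can be checked with a rank-correlation statistic. The sketch below uses SciPy's spearmanr on hypothetical illustration scores; the player names and numbers are invented for demonstration and are not results from the paper.

```python
# A hedged sketch: quantify agreement between two analysis settings with
# Spearman rank correlation (assumes SciPy; all scores below are
# hypothetical illustration values, not the paper's data).
from scipy.stats import spearmanr

# Hypothetical mean-loss scores (lower = better) from two engines.
scores_a = {"Player1": 8.1, "Player2": 8.5, "Player3": 9.0, "Player4": 12.3}
scores_b = {"Player1": 7.4, "Player2": 7.9, "Player3": 8.6, "Player4": 11.8}

names = sorted(scores_a)
rho, pval = spearmanr([scores_a[n] for n in names],
                      [scores_b[n] for n in names])
# rho close to 1 means both settings rank the players almost identically.
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```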