Do skilled decision-makers plan further into the future than novices? This question has been investigated for almost 75 years, traditionally by studying expert players in complex board games like chess. However, the complexity of these games poses a barrier to detailed modeling of human behavior. By contrast, common planning tasks in cognitive science are often much simpler and impose a low ceiling on the depth to which any player can plan. Here, we investigate expertise by studying decision-making in a board game which is at the limit of complexity that can be precisely modeled using state-of-the-art statistical techniques, and which offers ample opportunity for skilled players to plan deeply. We find robust evidence for increased planning depth with expertise in both laboratory and large-scale naturalistic data.
Making good decisions requires thinking ahead, but the huge number of actions and outcomes one could consider makes exhaustive planning infeasible for computationally constrained agents, such as humans. How people are nevertheless able to solve novel problems when their actions have far-reaching consequences is thus a long-standing question in cognitive science. To address this question, we propose a model of resource-constrained planning that allows us to derive optimal planning strategies. We find that previously proposed heuristics such as best-first search are near-optimal under some circumstances, but not others. In a mouse-tracking paradigm, we show that people adapt their planning strategies accordingly, planning in a manner that is broadly consistent with the optimal model but not with any single heuristic model. We also find systematic deviations from the optimal model that might result from additional cognitive constraints that are yet to be uncovered.
The fate of scientific hypotheses often relies on the ability of a computational model to explain the data, quantified in modern statistical approaches by the likelihood function. The log-likelihood is the key element for parameter estimation and model evaluation. However, the log-likelihood of complex models in fields such as computational biology and neuroscience is often intractable to compute analytically or numerically. In those cases, researchers can often only estimate the log-likelihood by comparing observed data with synthetic observations generated by model simulations. Standard techniques to approximate the likelihood via simulation either use summary statistics of the data or risk producing substantial biases in the estimate. Here, we explore another method, inverse binomial sampling (IBS), which can estimate the log-likelihood of an entire data set efficiently and without bias. For each observation, IBS draws samples from the simulator model until one matches the observation. The log-likelihood estimate is then a function of the number of samples drawn. The variance of this estimator is uniformly bounded and achieves the minimum variance possible for an unbiased estimator, and we can compute calibrated estimates of it. We provide theoretical arguments in favor of IBS and an empirical assessment of the method for maximum-likelihood estimation with simulation-based models. As case studies, we take three model-fitting problems of increasing complexity from computational and cognitive neuroscience. In all problems, IBS generally produces lower error in the estimated parameters and maximum log-likelihood values than alternative sampling methods with the same average number of samples. Our results demonstrate the potential of IBS as a practical, robust, and easy-to-implement method for log-likelihood evaluation when exact techniques are not available.
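The per-observation procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a simulator that maps a stimulus (and a random source) to a sampled response, and it uses the harmonic-sum form of the IBS estimate, under which a first-draw match contributes a log-likelihood estimate of zero.

```python
import math
import random


def ibs_loglik(simulate, data, rng=None):
    """Unbiased IBS estimate of the log-likelihood of a data set.

    For each (stimulus, response) pair, draw from the simulator until a
    sample matches the observed response; with K draws needed, the
    per-trial estimate is -sum_{k=1}^{K-1} 1/k.
    """
    rng = rng or random.Random(0)
    total = 0.0
    for stimulus, response in data:
        k = 1
        while simulate(stimulus, rng) != response:
            k += 1
        # Harmonic sum is empty (zero) when the first draw matches.
        total -= sum(1.0 / j for j in range(1, k))
    return total
```

As a sanity check on unbiasedness: for a simulator that matches each observation independently with probability p, the expected per-trial estimate is log p, so averaging over many trials should approach log p.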
A critical aspect of human intelligence is our ability to plan, that is, to use a model of the world to simulate, evaluate, and select among hypothetical future actions. However, exhaustive planning is intractable because the number of possible action sequences increases exponentially with the number of steps that one plans ahead. Understanding how people are nevertheless able to solve novel problems when their actions have far-reaching consequences is thus critical to understanding human intelligence. Progress in answering this question has been hampered by two challenges: planning cannot be directly observed, and we do not have a good framework for formalizing the tradeoff between performance and computational cost. In this work, we propose solutions to both challenges, based on the idea that planning can be conceptualized as information seeking. Specifically, we model planning as the selection of information-generating computations and introduce an experimental paradigm in which these computations are externalized as mouse clicks. We find that our participants' behavior is broadly consistent with the optimal information-seeking model. We also uncover systematic deviations that might result from heuristic approximations or additional cognitive constraints that have yet to be uncovered.