We assessed the suitability of six applied tests of cognitive functioning to provide a single marker for dose-related alcohol intoxication. Numerous studies have demonstrated that alcohol has a deleterious effect on specific areas of cognitive processing, but few have compared the effects of alcohol across a wide range of different cognitive processes. Adult participants (N = 56; 32 males, 24 females; aged 18–45 years) were randomized to control or alcohol treatments within a mixed-design experiment involving multiple dosages at approximately one-hour intervals (attained mean blood alcohol concentrations (BACs) of 0.00, 0.048, 0.082, and 0.10%), employing a battery of six psychometric tests: the Useful Field of View test (UFOV; processing speed together with directed attention); the Self-Ordered Pointing Task (SOPT; working memory); Inspection Time (IT; speed of processing independent of motor responding); the Traveling Salesperson Problem (TSP; strategic optimization); the Sustained Attention to Response Task (SART; vigilance, response inhibition, and psychomotor function); and the Trail-Making Test (TMT; cognitive flexibility and psychomotor function). Results demonstrated that impairment is not uniform across different domains of cognitive processing, and that both the size of the alcohol effect and the magnitude of the change in effect across dose levels differ quantitatively for different cognitive processes. Only IT met the criteria for a marker suitable for widespread application: a reliable dose-related decline in a basic process as a function of rising BAC, combined with easy-to-use, non-invasive task properties.
We investigated the properties of the distribution of human solution times for Traveling Salesperson Problems (TSPs) with increasing numbers of nodes. New experimental data are presented that measure solution times for carefully chosen representative problems with 10, 20, ..., 120 nodes. We compared the solution times predicted by the convex hull procedure proposed by MacGregor and Ormerod (1996), by the hierarchical approach of Graham, Joshi, and Pizlo (2000), and by five algorithms drawn from the artificial intelligence and operations research literature. The most likely polynomial model for describing the relationship between mean solution time and the size of a TSP is linear or near-linear over the range of problem sizes tested, supporting the earlier finding of Graham et al. (2000). We argue that the properties of the solution time distributions place strong constraints on the development of detailed models of human performance for TSPs, and we provide some evaluation of previously proposed models in light of our findings.
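To illustrate the kind of model comparison described above, the sketch below fits polynomial models of increasing order to mean solution times as a function of problem size and compares their fit; under a near-linear relationship, higher-order terms should add little. The data values and function names here are illustrative assumptions, not the experimental data reported in the study.

```python
import numpy as np

# Hypothetical mean solution times (seconds) for TSPs of increasing size;
# the numbers below are illustrative only, not the study's data.
n_nodes = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120])
mean_time = np.array([12, 25, 36, 50, 61, 74, 85, 99, 110, 124, 135, 148])

def fit_polynomial(x, y, degree):
    """Fit a polynomial of the given degree; return (coefficients, SSE)."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return coeffs, float(np.sum(residuals ** 2))

# Compare linear, quadratic, and cubic fits; a near-linear relationship
# should show little reduction in error beyond degree 1.
for degree in (1, 2, 3):
    coeffs, sse = fit_polynomial(n_nodes, mean_time, degree)
    print(f"degree {degree}: SSE = {sse:.2f}, coefficients = {np.round(coeffs, 4)}")
```

In practice a full model comparison would also penalize the extra parameters of the higher-order fits (e.g., via AIC or BIC) rather than relying on raw error alone.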
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
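A minimal sketch of the idea behind a strong/weak sampling mixture is given below for a one-dimensional interval-concept task. It assumes a uniform prior over intervals on a grid and a per-observation likelihood that mixes the strong-sampling size principle (1/|h|) with a constant weak-sampling term; the specific parameterization, helper name, and example values are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def generalization_prob(data, query, theta, lower=0.0, upper=10.0, grid=100):
    """
    Probability that a query point belongs to the same 1-D interval concept
    as the observed data, under a mixture of strong and weak sampling.
    theta = 1.0 recovers strong sampling; theta = 0.0 recovers weak sampling.
    Hypotheses are all intervals [a, b] on a discrete grid, with a uniform prior.
    """
    data = np.asarray(data, dtype=float)
    points = np.linspace(lower, upper, grid)
    weak_const = 1.0 / (upper - lower)  # constant weak-sampling likelihood
    numerator = 0.0
    denominator = 0.0
    for i, a in enumerate(points):
        for b in points[i + 1:]:
            if data.min() < a or data.max() > b:
                continue  # hypothesis inconsistent with the observed data
            size = b - a
            # Mixture likelihood per observation: strong sampling favors
            # small hypotheses via 1/size; weak sampling is flat.
            likelihood = (theta / size + (1.0 - theta) * weak_const) ** len(data)
            denominator += likelihood
            if a <= query <= b:
                numerator += likelihood
    return numerator / denominator

# With more weight on strong sampling the generalization gradient tightens
# around the observations; with weak sampling it stays broad.
for theta in (0.0, 0.5, 1.0):
    p = generalization_prob(data=[4.0, 5.0], query=7.0, theta=theta)
    print(f"theta = {theta:.1f}: P(query in concept) = {p:.3f}")
```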
The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.
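For readers unfamiliar with local-to-global heuristics, the sketch below builds a tour by repeatedly linking nearest neighbors. It is only a minimal illustration of nearest-neighbor-driven construction, not the hierarchical clustering model discussed above; the function names and the random example instance are assumptions for illustration.

```python
import numpy as np

def nearest_neighbor_tour(points, start=0):
    """Build a tour by repeatedly moving to the nearest unvisited point."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        current = points[tour[-1]]
        nearest = min(unvisited, key=lambda i: np.linalg.norm(points[i] - current))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour over the given point indices."""
    points = np.asarray(points, dtype=float)
    ordered = points[tour + [tour[0]]]
    return float(np.sum(np.linalg.norm(np.diff(ordered, axis=0), axis=1)))

# Example on a small random instance.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(15, 2))
tour = nearest_neighbor_tour(pts)
print("tour:", tour, "length:", round(tour_length(pts, tour), 1))
```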