After 100 years of discussion, response bias remains a controversial topic in psychological measurement. The use of bias indicators in applied assessment is predicated on the assumptions that (a) response bias suppresses or moderates the criterion-related validity of substantive psychological indicators and (b) bias indicators are capable of detecting the presence of response bias. To test these assumptions, we reviewed literature comprising investigations in which bias indicators were evaluated as suppressors or moderators of the validity of other indicators. This review yielded only 41 studies across the contexts of personality assessment, workplace variables, emotional disorders, eligibility for disability, and forensic populations. In the first two contexts, there were enough studies to conclude that support for the use of bias indicators was weak. Evidence suggesting that random or careless responding may represent a biasing influence was noted, but this conclusion was based on a small set of studies. Several possible causes for failure to support the overall hypothesis were suggested, including poor validity of bias indicators, the extreme base rate of bias, and the adequacy of the criteria. In the other settings, the yield was too small to afford viable conclusions. Although the absence of a consensus could be used to justify continued use of bias indicators in such settings, false positives have their costs, including wasted effort and adverse impact. Despite many years of research, a sufficient justification for the use of bias indicators in applied settings remains elusive.
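As a rough illustration of the two assumptions being tested (not taken from the review), the sketch below shows how a bias indicator is typically evaluated against a criterion: as a moderator, by testing a scale-by-bias interaction term, and as a suppressor, by testing whether adding the bias indicator increases the variance explained beyond the substantive scale alone. The variable names and simulated data are illustrative assumptions only.

```python
# Minimal sketch (not from the review): evaluating a bias indicator as a
# moderator (interaction term) or suppressor (increment in R^2) of the
# criterion-related validity of a substantive scale.
import numpy as np

rng = np.random.default_rng(0)
n = 500
scale = rng.normal(size=n)       # substantive indicator (e.g., a trait scale)
bias = rng.normal(size=n)        # bias indicator (e.g., impression management)
criterion = 0.4 * scale + rng.normal(size=n)  # simulated external criterion

def r_squared(X, y):
    """R^2 from an OLS fit, with an intercept column added."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_scale = r_squared(scale[:, None], criterion)
r2_suppression = r_squared(np.column_stack([scale, bias]), criterion)
r2_moderation = r_squared(np.column_stack([scale, bias, scale * bias]), criterion)

print(f"scale only:             R^2 = {r2_scale:.3f}")
print(f"+ bias (suppression):   R^2 = {r2_suppression:.3f}")
print(f"+ interaction (moder.): R^2 = {r2_moderation:.3f}")
```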
The increased use of effect sizes in single studies and meta-analyses raises new questions about statistical inference. Choice of an effect-size index can have a substantial impact on the interpretation of findings. The authors demonstrate the issue by focusing on two popular effect-size measures, the correlation coefficient and the standardized mean difference (e.g., Cohen's d or Hedges's g), both of which can be used when one variable is dichotomous and the other is quantitative. Although the indices are often practically interchangeable, differences in sensitivity to the base rate or variance of the dichotomous variable can alter conclusions about the magnitude of an effect depending on which statistic is used. Because neither statistic is universally superior, researchers should explicitly consider the importance of base rates to formulate correct inferences and justify the selection of a primary effect-size statistic.
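As a rough illustration of that sensitivity (not taken from the article), the sketch below uses the standard large-sample conversion between a standardized mean difference d and the point-biserial correlation, r = d / sqrt(d^2 + 1/(p(1-p))), where p is the base rate of the dichotomous variable. The specific values of d and p are arbitrary.

```python
# Minimal sketch: how the point-biserial correlation r shrinks as the base
# rate of the dichotomous variable departs from .50, while Cohen's d stays fixed.
# Uses the standard large-sample conversion r = d / sqrt(d^2 + 1/(p*(1-p))).
import math

def point_biserial_from_d(d: float, p: float) -> float:
    """Convert a standardized mean difference d to a point-biserial r,
    given base rate p of the dichotomous variable."""
    return d / math.sqrt(d ** 2 + 1.0 / (p * (1.0 - p)))

d = 0.5  # a "medium" standardized mean difference, held constant
for p in (0.50, 0.25, 0.10, 0.05):
    r = point_biserial_from_d(d, p)
    print(f"base rate p = {p:.2f}  ->  r = {r:.3f}")

# r falls from about .24 at p = .50 to about .11 at p = .05, so the same d
# can look like a much smaller effect when expressed as a correlation.
```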
The evidence suggests that ADHD is associated with substantial deficits across a variety of neurocognitive domains. This is the most in-depth review of the neurocognitive functioning of people with ADHD to date.
Provenance is well understood in the context of art or digital libraries, where it refers, respectively, to the documented history of an art object or to the documentation of processes in a digital object's life cycle. Interest in provenance is also growing in the "e-science community" [12], since provenance is seen as a crucial component of workflow systems that can help scientists ensure the reproducibility of their scientific analyses and processes [2,4]. Against this background, the International Provenance and Annotation Workshop (IPAW'06), held on May 3-5, 2006 in Chicago, brought together some 50 participants interested in the issues of data provenance, process documentation, data derivation, and data annotation [7]. During a session on provenance standardization, a consensus began to emerge that the provenance research community needed a better understanding of the capabilities of the different systems, the representations they used for provenance, their similarities, their differences, and the rationale behind their designs. Hence, the first Provenance Challenge [1] was born; from the outset it was set up to be informative rather than competitive, providing a forum for the community to understand the capabilities of different provenance systems and the expressiveness of their provenance representations. Participants simulated or ran a Functional Magnetic Resonance Imaging workflow and implemented and executed a pre-identified set of "provenance queries" over it. Sixteen teams responded to the challenge and reported their experience in a journal special issue [9].

The first Provenance Challenge was followed by the second Provenance Challenge [1], which aimed to establish interoperability between systems by exchanging provenance information. During discussions, the thirteen teams that responded to the second challenge found substantial agreement on a core representation of provenance. As a result, following a workshop in Salt Lake City in August 2007, the authors crafted a data model and released it as the Open Provenance Model (OPM v1.00) [8]. On June 19, 2008, some twenty participants attended the first OPM workshop, held after IPAW'08 [3], to discuss the OPM specification. Minutes and recommendations from the workshop [5] were published and led to the current version (v1.01) of the Open Provenance Model [10].