The measurement of subjective pain intensity continues to be important to both researchers and clinicians. Although several scales are currently used to assess the intensity construct, it remains unclear which of these provides the most precise, replicable, and predictively valid measure. Five criteria for judging intensity scales have been considered in previous research: ease of administration and scoring; relative rates of incorrect responding; sensitivity as defined by the number of available response categories; sensitivity as defined by statistical power; and the magnitude of the relationship between each scale and a linear combination of pain intensity indices. To judge commonly used pain intensity measures, 75 chronic pain patients were asked to rate 4 kinds of pain (present, least, most, and average) using 6 scales. The utility and validity of the scales were judged using the criteria listed above. The results indicate that, for the present sample, the scales yield similar results in terms of the number of subjects who respond to them correctly and in their predictive validity. However, on the remaining 3 criteria, the 101-point numerical rating scale appears to be the most practical index.
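To make the fifth criterion concrete, the sketch below shows one way a scale's relationship with a linear combination of pain intensity indices could be computed. It is a minimal illustration, not the authors' procedure: the scale names, the simulated ratings, and the unweighted composite are all assumptions made for this example.

```python
# Hedged sketch: correlating each (hypothetical) scale with a linear
# combination of pain intensity indices. Data and weights are invented.
import numpy as np

rng = np.random.default_rng(0)
n_patients = 75  # matches the sample size reported in the abstract

# Hypothetical ratings of "average" pain on two illustrative scales,
# rescaled to 0-1 so they are comparable.
ratings = {
    "NRS-101": rng.integers(0, 101, n_patients) / 100.0,
    "VAS": rng.random(n_patients),
}

# A linear combination (here, an unweighted mean) of all indices serves as
# the composite criterion against which each scale is judged.
composite = np.mean(np.column_stack(list(ratings.values())), axis=1)

for name, values in ratings.items():
    r = np.corrcoef(values, composite)[0, 1]
    print(f"{name}: r with composite = {r:.2f}")
```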
The current crisis in scientific psychology about whether our findings are irreproducible was presaged years ago by Tversky and Kahneman (1971), who noted that even sophisticated researchers believe in the fallacious Law of Small Numbers, that is, erroneous intuitions about how imprecisely sample data reflect population phenomena. Combined with the low power of most current work, this belief often leads to the use of misleading criteria for whether an effect has replicated. Rosenthal (1990) suggested more appropriate criteria, here labeled the continuously cumulating meta-analytic (CCMA) approach. For example, a CCMA analysis of a replication attempt that does not reach significance might nonetheless provide more, not less, evidence that the effect is real. Alternatively, measures of heterogeneity might show that two studies that differ in whether they are significant nonetheless have only trivially different effect sizes. We present a nontechnical introduction to the CCMA framework (referencing relevant software) and then explain how it can be used to address aspects of replicability or, more generally, to assess quantitative evidence from numerous studies. We then present examples and simulation results showing how combining evidence with the CCMA approach can yield stronger conclusions than consideration of single studies alone.
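As a rough illustration of the kind of calculation the CCMA approach involves (not the authors' software), the sketch below combines two hypothetical studies by inverse-variance weighting and checks heterogeneity with Cochran's Q. The effect sizes, sample sizes, and the choice of a fixed-effect model are assumptions made purely for this example.

```python
# Hedged sketch: fixed-effect combination of two studies plus a Q test,
# in the spirit of cumulating evidence across an original study and a
# replication. All numbers are invented for illustration.
import numpy as np
from scipy import stats

# (Cohen's d, per-group n) for a hypothetical original study and replication
studies = [(0.60, 30), (0.45, 20)]

def var_d(d, n_per_group):
    # Approximate sampling variance of Cohen's d for a two-group design.
    return (2 / n_per_group) + d**2 / (4 * n_per_group)

d = np.array([s[0] for s in studies])
v = np.array([var_d(*s) for s in studies])
w = 1 / v  # inverse-variance weights

# Fixed-effect combined estimate and its z test
d_combined = np.sum(w * d) / np.sum(w)
se_combined = np.sqrt(1 / np.sum(w))
z = d_combined / se_combined
p_combined = 2 * stats.norm.sf(abs(z))

# Cochran's Q: do the two effect sizes differ more than chance would allow?
q = np.sum(w * (d - d_combined) ** 2)
p_q = stats.chi2.sf(q, df=len(studies) - 1)

print(f"combined d = {d_combined:.2f}, z = {z:.2f}, p = {p_combined:.4f}")
print(f"heterogeneity Q = {q:.2f}, p = {p_q:.2f}")
```

In this toy case a nonsignificant replication can still strengthen the combined evidence, and a small, nonsignificant Q indicates the two effect sizes are only trivially different, which is the pattern the abstract describes.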