For continuous constructs, the most frequently used index of interrater agreement, rwg(1), can be problematic. Typically, rwg(1) is estimated with the assumption that a uniform distribution represents no agreement. The authors review the limitations of this uniform-null rwg(1) index and discuss alternative methods for measuring interrater agreement. A new interrater agreement statistic, awg(1), is proposed. The authors derive the awg(1) statistic and demonstrate that awg(1) is an analogue to Cohen's kappa, an interrater agreement index for nominal data. A comparison is made between agreement estimates based on the uniform rwg(1) and awg(1), and issues such as minimum sample size and practical significance levels are discussed. The authors close with recommendations regarding the use of rwg(1)/rwg(J) indices when a uniform null is assumed, rwg(1)/rwg(J) indices that do not assume a uniform null, awg(1)/awg(J) indices, and generalizability estimates of interrater agreement.
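As a minimal computational sketch of the two indices discussed above: the uniform-null rwg(1) compares the observed variance across raters with the variance expected under a rectangular distribution over the A response options (James, Demaree, & Wolf, 1984), while awg(1) scales the observed variance against the maximum variance attainable at the observed mean. The awg(1) expression below is reproduced from memory of the general form in Brown and Hauenstein's work and should be verified against the original article; the function names and example ratings are illustrative only.

```python
import numpy as np

def rwg1_uniform(ratings, num_options):
    """rwg(1) for a single item under a uniform (rectangular) null distribution
    over the num_options response categories (James, Demaree, & Wolf, 1984)."""
    x = np.asarray(ratings, dtype=float)
    s2 = x.var(ddof=1)                           # observed variance across raters
    sigma_eu2 = (num_options ** 2 - 1) / 12.0    # expected variance under the uniform null
    return 1.0 - s2 / sigma_eu2

def awg1(ratings, scale_min, scale_max):
    """awg(1): observed variance relative to the maximum variance possible given
    the observed mean, so no uniform null is assumed. Formula written from memory
    of Brown and Hauenstein (2005); check against the source before substantive use."""
    x = np.asarray(ratings, dtype=float)
    n, m, s2 = len(x), x.mean(), x.var(ddof=1)
    max_var = ((scale_max + scale_min) * m - m**2 - scale_max * scale_min) * n / (n - 1)
    return 1.0 - 2.0 * s2 / max_var

# Example: five raters on a 5-point scale, fairly high agreement
print(rwg1_uniform([4, 4, 5, 4, 5], num_options=5))          # ~0.85
print(awg1([4, 4, 5, 4, 5], scale_min=1, scale_max=5))       # ~0.76
```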
The present study assessed whether success at faking a commercially available integrity test relates to individual differences among the test takers. We administered the Reid Report, an overt integrity test, twice to a sample of college students with instructions to answer honestly on one administration and "fake good" on the other. These participants also completed a measure of general cognitive ability, the Raven Advanced Progressive Matrices. Integrity test scores were 1.3 standard deviations higher in the faking condition (p<.05). There was a weak but significant positive relation between general cognitive ability and faking success, calculated as the difference in scores between the honest and faked administrations of the Reid Report (r=.17, p<.05). An examination of the correlations between faking success and general cognitive ability by item type suggested that the relation is due to the items that pose hypothetical scenarios, e.g., "Should an employee be fired for stealing a few office supplies?" (r=.22, p<.05), and not the items that ask for admissions of undesirable past behaviors, e.g., "Have you ever stolen office supplies?" (r=.02, p>.05; t=2.06, p<.05 for the difference between correlations). These results suggest that general cognitive ability is indeed an individual difference relevant to success at faking an overt integrity test.
The primary purpose of this study was to test two hypotheses proposed by Bracken and McCallum (1998), authors of the Universal Nonverbal Intelligence Test (UNIT), as to how children diagnosed with ADHD would perform on the UNIT. Twenty-nine students between the ages of 5 and 17 years were administered the extended battery of the UNIT twice, with an average of 31 days between testing sessions. Paired-sample t tests were used to compare mean scores on various sections of the UNIT, and test-retest stability coefficients were compared with those reported in the test manual. As one hypothesis predicted, the students' scores were significantly lower on the Memory Quotient than on the Reasoning Quotient. However, an alternative hypothesis predicting lower scores on successive-processing and planning tasks than on simultaneous tasks was not supported. Additional findings on the stability of the UNIT with students with ADHD are also reported.