Examining how other emergency response teams adapt, communicate, solve problems, build trust, and develop shared knowledge among team members can improve the performance of cybersecurity incident response teams.
Researchers are frequently concerned that people respond to questions on sensitive topics (e.g., those involving money, criminal activity, or sexual behavior) in ways that make them look more socially desirable than they are. For decades, the technique known as “policy capturing” (or “judgment analysis”) has been recommended as a solution to socially desirable responding (i.e., “faking good”). Surprisingly, however, the extent to which policy capturing actually reduces socially desirable responding had not previously been tested empirically in a comprehensive manner. We examined the importance respondents assigned to several job characteristics, some of which (e.g., pay, schedule flexibility) tend to be susceptible to socially desirable responding. We compared responses obtained via policy capturing with those from four traditional self-report techniques (Likert-type, forced choice, ranking, and points distribution) across four instructional sets: instructions to respond honestly, warnings not to respond dishonestly, instructions to respond in a socially desirable manner, and no specific instructions. Results from both between-subjects and within-subjects comparisons indicated that policy capturing was indeed much more resistant to socially desirable responding than any of the traditional self-report techniques.
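To make concrete how policy capturing derives importance weights without asking respondents directly, the sketch below simulates one respondent rating hypothetical job profiles and then recovers that respondent's implicit cue weights by regression. The cue names (pay, flexibility, commute), the data, and the scaling are assumptions for illustration only, not the instrument or analysis used in the study.

```python
# A minimal, hypothetical sketch of policy capturing: a respondent rates many
# profiles that vary several cues at once, and the respondent's "policy" is
# recovered by regressing the overall ratings on the cue values.
# Cue names, data, and scaling are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# 40 hypothetical job profiles, each described by three cues on a 1-5 scale:
# pay level, schedule flexibility, and commute length (all hypothetical).
n_profiles = 40
cues = rng.integers(1, 6, size=(n_profiles, 3)).astype(float)

# Simulated overall attractiveness ratings from one respondent whose
# (unstated) policy weights pay most heavily, plus some rating noise.
true_weights = np.array([0.6, 0.3, -0.2])
ratings = cues @ true_weights + 2.0 + rng.normal(0, 0.3, n_profiles)

# Recover the respondent's policy: regress ratings on cues (with intercept).
X = np.column_stack([np.ones(n_profiles), cues])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)

for name, w in zip(["intercept", "pay", "flexibility", "commute"], coefs):
    print(f"{name:>12}: {w:+.2f}")
```

Because the weights are inferred from ratings of whole profiles rather than reported directly, a respondent cannot easily inflate the apparent importance of a socially desirable cue, which is consistent with the resistance to faking reported above.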
Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability estimates, and in particular test-retest reliability estimates, in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on the policy-capturing studies that did report test-retest reliability estimates and obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). Test-retest reliability was robust to variation in 14 of the 16 factors examined, but reliability was higher in paper-and-pencil studies than in web-based studies and higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we offer several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies so that it can be reported appropriately, and (c) properly interpreting test-retest reliability in policy-capturing studies.
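As a rough illustration of the two quantities discussed above, the hypothetical sketch below computes (1) a test-retest reliability estimate as the correlation between one respondent's judgments of the same profiles on two occasions, and (2) a simple sample-size-weighted mean reliability across studies, in the spirit of a reliability generalization meta-analysis. All figures are invented, and the meta-analytic procedure actually used may weight or correct estimates differently.

```python
# Hypothetical sketch: (1) test-retest reliability for one respondent, and
# (2) a sample-size-weighted mean reliability across studies.
# All numbers are invented for illustration.
import numpy as np

# (1) One respondent's ratings of ten repeated profiles at time 1 and time 2.
time1 = np.array([5, 3, 4, 2, 5, 1, 4, 3, 2, 4], dtype=float)
time2 = np.array([5, 2, 4, 3, 5, 1, 4, 4, 2, 3], dtype=float)
test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r for this respondent: {test_retest_r:.2f}")

# (2) Weighted mean reliability across hypothetical studies,
# weighting each study's estimate by its sample size.
study_r = np.array([0.82, 0.71, 0.90, 0.65, 0.80])
study_n = np.array([45, 120, 60, 200, 90])
weighted_mean_r = np.average(study_r, weights=study_n)
print(f"weighted mean reliability: {weighted_mean_r:.2f}")
```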