A meta-analysis of social desirability distortion compared computer questionnaires with traditional paper-and-pencil questionnaires and face-to-face interviews across 61 studies (1967-1997; 673 effect sizes). After controlling for correlated observations, the overall effect size for computer versus paper-and-pencil questionnaires was near zero. Moderator analyses showed less distortion on computerized measures of social desirability responding than on paper-and-pencil measures, especially when respondents were alone and could backtrack. On other scales there was more distortion on the computer, but the distortion was small when respondents were alone, anonymous, and could backtrack. There was also less distortion in computerized interviews than in face-to-face interviews. Further research is needed on nonlinear patterns of distortion and on how context and interface affect privacy perceptions and responses to sensitive questions.
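To make the effect-size language above concrete, the sketch below shows one conventional way to compute a standardized mean difference (Cohen's d) for a single study and a sample-size-weighted overall mean across studies. All function names and numbers are hypothetical; the meta-analysis's actual weighting scheme and its correction for correlated observations are not described in this excerpt, so this is an illustration of the general idea rather than the authors' procedure.

```python
import numpy as np

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Standardized mean difference between two administration modes (e.g., computer vs. paper)."""
    pooled_sd = np.sqrt(((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def weighted_mean_effect(effect_sizes, sample_sizes):
    """Sample-size-weighted mean effect size across studies (a simple fixed-effect stand-in)."""
    return float(np.average(np.asarray(effect_sizes, dtype=float),
                            weights=np.asarray(sample_sizes, dtype=float)))

# Hypothetical per-study effects: positive values indicate more distortion on the computer.
d_first = cohens_d(mean_a=3.1, mean_b=3.0, sd_a=0.9, sd_b=1.0, n_a=60, n_b=60)
d_values = [d_first, -0.02, 0.01, -0.04]
n_values = [120, 80, 200, 150]
print(round(weighted_mean_effect(d_values, n_values), 3))  # close to zero, echoing the overall finding
```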
In this study, researchers examined the effect of task engagement and experience on frequency rating accuracy against an objective measure of task frequency. Contrasting findings from the memory literature on frequency estimation with current task analysis practices, the investigators proposed that task performers and respondents with low levels of experience would generate more accurate frequency estimates than task observers and respondents with high levels of experience. Participants were randomly assigned to a task engagement condition (performer vs. observer) and to an experience condition (low vs. high). In support of the hypotheses, participants in the performer and low-experience conditions produced more accurate frequency ratings. These findings were demonstrated across several different measures of accuracy. Implications for task analysis research and practice are discussed.

Knowledge about human work is obtained through various job analysis procedures. The information gained from a job analysis is necessary for organizations to develop procedures for personnel selection, classification, placement, training, performance appraisal, job evaluation, and job design (McCormick, Jeanneret, & Mecham, 1972). One critical component of many job analytic procedures is the frequency with which tasks are performed (Harvey, 1991). These ratings are often used to compute a criticality index that identifies the essential tasks performed in a job. Beyond job analysis, task frequency ratings have also been used by researchers in other areas such as performance appraisals (e.g.,
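The criticality index mentioned above is built from task ratings, but the excerpt does not specify a formula. The sketch below assumes, purely for illustration, the common convention of multiplying frequency by importance ratings; the task names, rating values, and cutoff are hypothetical rather than taken from the cited work.

```python
# Illustrative criticality index for task analysis; the formula (frequency x importance)
# and all task names, ratings, and the cutoff are assumptions, not values from the cited studies.

tasks = {
    "answer customer inquiries": {"frequency": 4.5, "importance": 3.0},
    "prepare annual budget":     {"frequency": 1.0, "importance": 4.8},
    "file incident reports":     {"frequency": 2.5, "importance": 4.0},
}

def criticality(ratings: dict) -> float:
    # One common convention: product of mean frequency and mean importance ratings.
    return ratings["frequency"] * ratings["importance"]

ESSENTIAL_CUTOFF = 9.0  # hypothetical threshold for flagging a task as essential

for name, ratings in sorted(tasks.items(), key=lambda kv: criticality(kv[1]), reverse=True):
    label = "essential" if criticality(ratings) >= ESSENTIAL_CUTOFF else "non-essential"
    print(f"{name}: {criticality(ratings):.1f} ({label})")
```

Tasks whose index exceeds the cutoff would be treated as the essential tasks of the job, which is how such an index typically feeds back into selection and training decisions.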