Data were simulated to conform to covariance patterns taken from the personnel selection literature. Two, six, and ten percent of the values were deleted from one of three predictor variables in sample sizes of 50, 100, and 200. Incomplete data matrices were treated by four methods: (a) elimination of cases with incomplete data records; (b) substitution of missing values with the variable mean; (c) replacement of missing values with an estimate obtained from simple regression; and (d) replacement of missing values with an estimate derived from iterated multiple regression. The treated data matrices were subjected to multiple regression analyses, and the resulting regression equations were compared to the equations obtained from the original, complete data. The two regression-based estimation procedures provided the most accurate regression equations, followed by the method of inserting means. Discarding cases with incomplete records was the least accurate method. Although the results supported the practice of using covariate information to estimate missing data, the increases in accuracy were minimal under the conditions investigated.
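As a rough illustration of the four treatments compared above, the Python sketch below applies case elimination, mean substitution, simple-regression imputation, and a basic iterated multiple-regression imputation to a three-predictor matrix with values deleted from one variable. The data, variable names, choice of predictor for the simple regression, and iteration count are illustrative assumptions, not the study's original procedure.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Illustrative data: three predictors, ~6% of x1 deleted (assumption).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 3)), columns=["x1", "x2", "x3"])
X.loc[rng.choice(100, 6, replace=False), "x1"] = np.nan

# (a) Case elimination (listwise deletion): drop incomplete records.
listwise = X.dropna()

# (b) Mean substitution: replace missing x1 values with the x1 mean.
mean_sub = X.fillna({"x1": X["x1"].mean()})

# (c) Simple-regression imputation: predict x1 from a single covariate
#     (x2 here, chosen arbitrarily) fitted on the complete cases.
obs = X["x1"].notna()
simple = X.copy()
fit = LinearRegression().fit(X.loc[obs, ["x2"]], X.loc[obs, "x1"])
simple.loc[~obs, "x1"] = fit.predict(X.loc[~obs, ["x2"]])

# (d) Iterated multiple-regression imputation: start from mean
#     substitution, then repeatedly re-estimate x1 from x2 and x3
#     until the imputed values stabilize (fixed count for brevity).
iterated = X.fillna({"x1": X["x1"].mean()})
for _ in range(10):
    fit = LinearRegression().fit(iterated[["x2", "x3"]], iterated["x1"])
    iterated.loc[~obs, "x1"] = fit.predict(iterated.loc[~obs, ["x2", "x3"]])
```

In the study's design, each treated matrix would then be submitted to the same multiple regression analysis and the resulting equation compared against the one estimated from the complete data.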
Although research conducted in applied settings is frequently hindered by missing data, there is surprisingly little practical advice concerning effective methods for dealing with the problem. The purpose of this article is to describe several alternative methods for dealing with incomplete multivariate data and to examine the effectiveness of these methods. It is concluded that pairwise deletion and listwise deletion are among the least effective methods in terms of approximating the results that would have been obtained had the data been complete, whereas replacing missing values with estimates based on correlational procedures generally produces the most accurate results. In addition, some descriptive statistical procedures are recommended that permit researchers to investigate the causes and consequences of incomplete data more fully.
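To make the two deletion methods mentioned above concrete: pairwise deletion computes each correlation from whichever cases are complete on that particular pair of variables, so different entries of the correlation matrix can rest on different subsamples, whereas listwise deletion restricts every statistic to the same set of fully complete cases. A minimal pandas sketch, with hypothetical data and variable names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(50, 3)), columns=["a", "b", "c"])
df.loc[:4, "a"] = np.nan  # delete a few values from one variable

# pandas uses pairwise deletion by default: each r(i, j) is computed
# from the rows where both variable i and variable j are present.
r_pairwise = df.corr()

# Listwise deletion drops every row containing a missing value first,
# so all correlations come from the same (smaller) complete subsample.
r_listwise = df.dropna().corr()
```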
Examinees who take credentialing tests and other types of high-stakes assessments are usually provided an opportunity to repeat the test if they are unsuccessful on initial attempts. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign an alternate form to repeat examinees. Given that the use of multiple forms presents both practical and psychometric challenges, it is important to determine if unwarranted score gains occur. Most research indicates that repeat examinees realize score gains when taking the same form twice; however, the research is far from conclusive, particularly within the context of credentialing. For the present investigations, two samples of repeat examinees were randomly assigned to receive either the same test form or a different, but parallel, form on the second occasion. Study 1 found score gains of about 0.79 SD units for 71 examinees who repeated a certification examination in computed tomography. Study 2 found gains of 0.48 SD units for 765 examinees who repeated a radiography certification examination. In both studies, score gains for examinees receiving the parallel test were nearly indistinguishable from score gains for those who received the same test. Factors are identified that may influence the generalizability of these findings to other assessment contexts.

The purpose of credentialing [1] is to assure the public that individuals who practice an occupation or profession have met certain standards (AERA/APA/NCME, 1999). Obtaining a credential often requires that an individual pass one or more examinations. Some of the more common credentialing examinations, such as those in accounting, law, medicine, nursing, psychology, and teaching, are taken by hundreds of thousands of examinees each year. Others are less well known, such as mastectomy fitting, retinal angiography, and underground storage tank installation, and may test a few dozen examinees each year. Because these tests can

[1] Like the Standards for Educational and Psychological Testing (AERA et al., 1999), we use the term credentialing to refer generically to both licensure and certification. Although licensure and certification have different functions, they employ similar methods to develop, administer, score, and interpret examinations.
Practice analysis (i.e., job analysis) serves as the cornerstone for the development of credentialing examinations and is generally used as the primary source of evidence when validating scores on such exams. Numerous methodological questions arise when planning and conducting a practice analysis, but there is little consensus in the measurement community regarding the answers to these questions. This article offers recommendations concerning the following issues: selecting a method of practice analysis; developing rating scales to describe practice; determining the content of test plans; using multivariate procedures for structuring test plans; and determining topic weights for test plans. The article closes by suggesting several references for further reading.