The System/360 Model 91 central processing unit provides internal computational performance one to two orders of magnitude greater than that of the IBM 7090 Data Processing System through a combination of advancements in machine organization, circuit design, and hardware packaging. The circuits employed switch in less than 3 nsec, and the circuit environment is such that delay is approximately 5 nsec per circuit level. Organizationally, primary emphasis is placed on (1) alleviating the disparity between storage time and circuit speed, and (2) the development of high-speed floating-point arithmetic algorithms. This paper deals mainly with item (1) of the organization. A design is described which improves the ratio of storage bandwidth and access time to cycle time through the use of storage interleaving and CPU buffer registers. It is shown that history recording (the retention of complete instruction loops in the CPU) reduces the need to exercise storage, and that sophisticated employment of buffering techniques reduces the effective access time. The system is organized so that execution hardware is separated from the instruction unit; the resulting smaller, semiautonomous "packages" improve intra-area communication.
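As a rough illustration of the interleaving idea mentioned in the abstract, the sketch below distributes consecutive storage addresses across independent banks so that sequential accesses can overlap in time. The bank count and the mapping function are hypothetical parameters chosen for illustration, not Model 91 specifications.

```python
# A minimal sketch of storage interleaving: consecutive addresses are
# spread across independent banks so sequential accesses can overlap.
# NUM_BANKS is an illustrative assumption, not a Model 91 figure.

NUM_BANKS = 4

def bank_for_address(address: int) -> int:
    """Select a storage bank from the low-order address bits."""
    return address % NUM_BANKS

# Addresses 0..7 map to banks 0,1,2,3,0,1,2,3: while bank 0 is still
# cycling on address 0, banks 1-3 can begin serving addresses 1-3.
for addr in range(8):
    print(f"address {addr} -> bank {bank_for_address(addr)}")
```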
Examinees who take credentialing tests and other high-stakes assessments are usually given an opportunity to repeat the test if they are unsuccessful on initial attempts. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign an alternate form to repeat examinees. Given that the use of multiple forms presents both practical and psychometric challenges, it is important to determine if unwarranted score gains occur. Most research indicates that repeat examinees realize score gains when taking the same form twice; however, the research is far from conclusive, particularly within the context of credentialing. For the present investigations, two samples of repeat examinees were randomly assigned to receive either the same test form or a different, but parallel, form on the second occasion. Study 1 found score gains of about 0.79 SD units for 71 examinees who repeated a certification examination in computed tomography. Study 2 found gains of 0.48 SD units for 765 examinees who repeated a radiography certification examination. In both studies, score gains for examinees receiving the parallel test were nearly indistinguishable from score gains for those who received the same test. Factors are identified that may influence the generalizability of these findings to other assessment contexts.

The purpose of credentialing¹ is to assure the public that individuals who practice an occupation or profession have met certain standards (AERA/APA/NCME, 1999). Obtaining a credential often requires that an individual pass one or more examinations. Some of the more common credentialing examinations, such as those in accounting, law, medicine, nursing, psychology, and teaching, are taken by hundreds of thousands of examinees each year. Others are less well known, such as mastectomy fitting, retinal angiography, and underground storage tank installation, and may test a few dozen examinees each year. Because these tests can ...

¹ Like the Standards for Educational and Psychological Testing (AERA et al., 1999), we use the term credentialing to refer generically to both licensure and certification. Although licensure and certification have different functions, they employ similar methods to develop, administer, score, and interpret examinations.
Examinees who take high‐stakes assessments are usually given an opportunity to repeat the test if they are unsuccessful on their initial attempt. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign a different test form to repeat examinees. The use of multiple forms is expensive and can present psychometric challenges, particularly for low‐volume credentialing programs; thus, it is important to determine if unwarranted score gains actually occur. Prior studies provide strong evidence that the same‐form advantage is pronounced for aptitude tests. However, the sparse research within the context of achievement and credentialing testing suggests that the same‐form advantage is minimal. For the present experiment, 541 examinees who failed a national certification test were randomly assigned to receive either the same test or a different (parallel) test on their second attempt. Although the same‐form group had shorter response times on the second administration, score gains for the two groups were indistinguishable. We discuss factors that may limit the generalizability of these findings to other assessment contexts.
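To make concrete the quantity compared in these two abstracts, the sketch below computes a standardized score gain for a same-form group and a parallel-form group. Dividing the mean second-attempt gain by the SD of first-attempt scores is one common way to express gains in SD units; the scores, group sizes, and variable names here are invented for illustration, and the studies' actual scoring models are not reproduced.

```python
# A minimal sketch of comparing standardized score gains between a
# same-form and a parallel-form group. All data are hypothetical.

import statistics

def standardized_gain(first_scores, second_scores):
    """Mean second-attempt gain expressed in SD units of first-attempt scores."""
    gains = [s2 - s1 for s1, s2 in zip(first_scores, second_scores)]
    return statistics.mean(gains) / statistics.stdev(first_scores)

# Hypothetical examinee scores (first attempt, second attempt).
same_form_first  = [61, 58, 64, 59, 62]
same_form_second = [63, 59, 65, 61, 63]
parallel_first   = [60, 57, 63, 59, 61]
parallel_second  = [61, 59, 64, 60, 62]

# Similar values for the two groups would mirror the papers' finding
# that same-form and parallel-form gains are nearly indistinguishable.
print("same form gain (SD units):    ",
      round(standardized_gain(same_form_first, same_form_second), 2))
print("parallel form gain (SD units):",
      round(standardized_gain(parallel_first, parallel_second), 2))
```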
The American Registry of Radiologic Technologists (ARRT) conducts periodic job analysis projects to update the content and eligibility requirements for all certification examinations. In 2009, the ARRT conducted a comprehensive job analysis project to update the content specifications and clinical competency requirements for the nuclear medicine technology examination. ARRT staff and a committee of volunteer nuclear medicine technologists designed a job analysis survey that was sent to a random sample of 1,000 entry-level staff nuclear medicine technologists. Through analysis of the survey data and the judgments of the committee, the project resulted in changes to the nuclear medicine technology examination task list, content specifications, and clinical competency requirements. The primary changes inspired by the project were the introduction of CT content to the examination and the expansion of the content covering cardiac procedures.

Key Words: nuclear medicine technology job analysis; ARRT nuclear medicine technology exam; ARRT nuclear medicine technology clinical competency requirements

J Nucl Med Technol 2010; 38:205-208. DOI: 10.2967/jnmt.110.081596

The job responsibilities of the nuclear medicine technologist are constantly evolving. New technology emerges that makes established procedures obsolete, and improvements in existing technology encourage the incorporation of new equipment and software into the workplace. The American Registry of Radiologic Technologists (ARRT) tracks these trends in the workplace by conducting periodic job analysis projects for all examinations. In 1980, the ARRT conducted its first systematic, large-scale effort to document the job functions of entry-level technologists working in nuclear medicine technology (NMT) (1). Throughout the 1980s and 1990s, the ARRT conducted a job analysis study every 5 y. More recently, the studies have been performed on a 6-y cycle, with an interim, smaller-scale job analysis update performed every 3 y. These updates are important for professions that are constantly evolving because of advances in equipment and technology, ensuring that the content specifications and clinical competency requirements keep up with current practice. The rationale for job analysis is outlined in the Standards for Educational and Psychological Testing (2) and in the standards adopted by the National Commission for Certifying Agencies (3). A job analysis project can be summarized as a thorough, systematic study of the activities performed in the work setting. Examination content is then designed to assess the knowledge and skills necessary for competent performance of the job duties identified by the study.

MATERIALS AND METHODS

A job analysis study for NMT was initiated in January 2009, with the goal of updating the content specifications and clinical competency requirements for the ARRT NMT examination in 2011. The central element of this study was a large-scale survey, conducted to determine what job functions were being performed by nuclear medicine technologists across the co...
Pass rates are key assessment statistics that are calculated for nearly all high‐stakes examinations. In this article, we define the terminal, first attempt, total attempts, and repeat attempts pass rates, and discuss the uses of each statistic. We also explain why, when repeat attempts are allowed, one should in many situations expect the terminal pass rate to be the highest, the first attempt pass rate the second highest, the total attempts pass rate the third highest, and the repeat attempts pass rate the lowest. Analyses of data from 14 credentialing programs showed that this expected ordering held for 13 of the 14 programs. Additional analyses of pass rates for radiography educational programs in one state showed that the ordering held at the state level but for only 6 of the 34 individual programs. We suggest that credentialing programs clearly state their pass rate definitions, carefully consider how repeat examinees may influence pass rate statistics, and weigh the meaning and uses of the different statistics when choosing which pass rates to report to stakeholders.
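To make the four definitions concrete, the sketch below computes each pass rate from per-examinee attempt histories. The records and field names are invented for illustration, and real programs would typically also window the data by cohort or reporting year; the sample data happen to produce the ordering the article describes.

```python
# A minimal sketch of the four pass-rate definitions, computed from
# hypothetical attempt histories (True = pass). Illustrative data only.

from typing import Dict, List

attempts: Dict[str, List[bool]] = {
    "A": [True],                  # passed on first attempt
    "B": [True],                  # passed on first attempt
    "C": [False, False, True],    # passed on a repeat attempt
    "D": [False, False],          # never passed
}

first = [h[0] for h in attempts.values()]
repeats = [r for h in attempts.values() for r in h[1:]]
all_attempts = first + repeats

first_attempt_rate = sum(first) / len(first)
repeat_attempts_rate = sum(repeats) / len(repeats) if repeats else 0.0
total_attempts_rate = sum(all_attempts) / len(all_attempts)
# Terminal rate: the fraction of examinees who ever passed.
terminal_rate = sum(any(h) for h in attempts.values()) / len(attempts)

# Expected ordering: terminal > first attempt > total attempts > repeat attempts.
print(f"terminal: {terminal_rate:.2f}, first attempt: {first_attempt_rate:.2f}, "
      f"total attempts: {total_attempts_rate:.2f}, repeat attempts: {repeat_attempts_rate:.2f}")
```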