Although there is research on training program evaluation, there is little systematic research on the design, development and use of training reaction evaluations. In order to obtain meaningful information from a program evaluation, evaluation professionals must be rigorous in the design and development of all aspects of an evaluation study, including instrumentation. The purposes of the study were to classify the dimensions of information sought using reaction evaluations and to establish design criteria for developing training reaction instruments. There were three major phases of the study: (1) classification of the dimensions and questionnaire design criteria used in reaction evaluations; (2) validation of the classified dimensions and the questionnaire design criteria by subject matter experts; and (3) assessment of a sample of training reaction instruments currently used in training programs in US corporations. The research findings were: eleven dimensions for reaction evaluation were identified and classified by purpose. Five overall design criteria, each consisting of several sub-criteria, were judged important in the design of reactionnaires. These include: introduction and directions; question format; question construction; questionnaire layout; and data analysis. It was concluded that a well-designed training reaction instrument integrates the proper application of design criteria with appropriate reaction dimensions. Most training reaction instruments used by US corporations consisted of questions representing only a few dimensions. The instruments varied in form and length. Few of them properly utilized the established questionnaire design criteria.
Performance improvement interventions, including training, are investments that can yield identifiable payoffs for an organization in the form of better job performance. Evaluation is vital to continuous improvement of human performance in the workplace. Without measures of effectiveness, organizations do not know whether dollars are being spent wisely and, consequently, whether to continue, modify, or improve performance interventions. There are several approaches to the evaluation of training programs, but few adequately cover the broader perspective of performance improvement. Various schemes and terms are used to describe facets of evaluating training programs; however, sometimes different terms describe the same event, and at other times quite different training evaluation activities are discussed by different authors using the same terms. The present article reviews six overall evaluation perspectives on corporate training programs: Kirkpatrick's four‐level approach; the CIRO approach; Hamblin's five‐level approach; the Florida State University approach; the Indiana University approach; and Phillips' five‐level approach. Four research areas for further study are recommended: overall evaluation models, causal relationships between evaluation categories, systematic research on how to evaluate the various categories, and appropriate uses of the results of evaluations.