We review evidence showing that multisource feedback ratings are related to other measures of leadership effectiveness and that different rater sources conceptualize performance in a similar manner. We then describe a meta-analysis of 24 longitudinal studies showing that improvement in direct report, peer, and supervisor ratings over time is generally small. We present a theoretical framework and review empirical evidence suggesting performance improvement should be more likely for some feedback recipients than others. Specifically, improvement is most likely to occur when feedback indicates that change is necessary, recipients have a positive feedback orientation, perceive a need to change their behavior, react positively to the feedback, believe change is feasible, set appropriate goals to regulate their behavior, and take actions that lead to skill and performance improvement.

It has been nearly 10 years since London and Smither (1995) evaluated the state of multisource feedback practice and offered theory-based propositions for understanding how people process and use the feedback. This article assesses progress in the field, especially focusing on the extent to which feedback recipients improve their performance after receiving multisource feedback. We argue that practitioners should not expect large, widespread performance improvement after employees receive multisource feedback. Instead, we present a theoretical model that suggests some feedback recipients should be more likely to improve than others. First, we review empirical evidence concerning the validity of multisource feedback. This is important because it would make little sense to focus …
We note that applicant reactions to selection procedures may be of practical importance to employers because of influences on organizations’ attractiveness to candidates, ethical and legal issues, and possible effects on selection procedure validity and utility. In Study 1, after reviewing sample items or brief descriptions of 14 selection tools, newly hired entry‐level managers (n = 110) and recruiting/employment managers (n = 44) judged simulations, interviews, and cognitive tests with relatively concrete item types (e.g., vocabulary, standard written English, mathematical word problems) to be significantly more job related than personality measures, biodata, and cognitive tests with relatively abstract item types (e.g., quantitative comparisons, letter sets). A measure of new managers’ cognitive abilities was positively correlated with their perceptions of the job relatedness of selection procedures. In Study 2, applicant reactions to a range of entry‐level to professional civil service examinations (assessed immediately after taking the exam) were positively related to (procedural and distributive) justice perceptions and willingness to recommend the employer to others (assessed one month after the exam, n = 460).
Despite extensive evidence that tests are valid for employee selection, Federal Guidelines have urged employers to seek alternative selection procedures that are equally valid but have less adverse impact on minorities. Research on the validity, adverse impact, and fairness of eight categories of alternatives was reviewed. The feasibility of operational use of each type of alternative in an employment setting was also discussed. Only biodata and peer evaluation were supported as having validities substantially equal to those for standardized tests. Previous reviews and more recent research indicated that interviews, self-assessments, reference checks, academic achievement, expert judgment, and projective techniques had levels of validity generally below those reported for tests. Data, where available, offered no clear indication that any of the alternatives met the criterion of having equal validity with less adverse impact. Results are discussed and several additional promising alternatives are described.

Since the first validation studies reported by Munsterberg, specialists in personnel selection have relied heavily on standardized psychological tests. The usefulness of standardized tests for personnel selection is strongly supported. Ghiselli, in his 1966 book, The Validity of Occupational Aptitude Tests, and in a 1973 Personnel Psychology article, summarized the results of hundreds of criterion-related validation studies including tests in five major categories: (1) intellectual abilities, (2) spatial and mechanical abilities, (3) perceptual accuracy, (4) motor abilities, and (5) personality tests.

The authors would like to thank all those members of APA Division 14 who so kindly shared their research findings with us. We would particularly like to thank Mary Tenopyr for her help and advice throughout.
New product development (NPD) speed is a key component of time‐based strategy, which has become increasingly important for managing innovation in a fast‐changing business environment. This meta‐analytic review assesses the generalizability of the relationships between NPD speed and 17 of its antecedents to provide a better understanding of the salient and cross‐situationally consistent factors that affect NPD speed. We grouped the antecedents into four categories of strategy, project, process, and team, and found that process and team characteristics are more generalizable and cross‐situationally consistent determinants of NPD speed than strategy and project characteristics. We also conducted subgroup analyses and found that research method variables, such as level of analysis, source of data, and measurement of speed, moderate the relationships between NPD speed and its antecedents. We apply the study's findings to assess several models of NPD speed, such as the balanced model of product development, the strategic orientation and organizational capability model, the compression vs. the experiential model, the centrifugal and centripetal model, and the product development cycle time model. We also discuss the implications of our findings for research and practice.
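To make the aggregation and subgroup steps concrete, here is a minimal Python sketch of the kind of computation such a meta-analytic review rests on: a sample-size-weighted mean correlation across studies (in the spirit of bare-bones Hunter–Schmidt aggregation) and a simple subgroup comparison on a method moderator. The study correlations, sample sizes, and the "source of data" levels below are invented for illustration and are not taken from the review itself.

```python
# Illustrative sketch only: weighted mean correlations and a subgroup
# (moderator) comparison. All study values below are hypothetical.
from dataclasses import dataclass

@dataclass
class Study:
    r: float          # observed correlation between an antecedent and NPD speed
    n: int            # sample size
    data_source: str  # hypothetical method moderator: "single" vs "multiple" informants

def weighted_mean_r(studies):
    """Sample-size-weighted mean correlation across studies."""
    total_n = sum(s.n for s in studies)
    return sum(s.r * s.n for s in studies) / total_n

def observed_variance(studies, mean_r):
    """Sample-size-weighted variance of the observed correlations."""
    total_n = sum(s.n for s in studies)
    return sum(s.n * (s.r - mean_r) ** 2 for s in studies) / total_n

# Hypothetical studies relating one antecedent (e.g., team dedication) to NPD speed.
studies = [
    Study(r=0.32, n=120, data_source="single"),
    Study(r=0.45, n=85,  data_source="multiple"),
    Study(r=0.28, n=200, data_source="single"),
    Study(r=0.51, n=60,  data_source="multiple"),
]

mean_r = weighted_mean_r(studies)
print(f"Overall weighted mean r = {mean_r:.3f}, "
      f"observed variance = {observed_variance(studies, mean_r):.4f}")

# Subgroup analysis: re-estimate the mean within each level of the method
# moderator, analogous to splitting studies by source of data or level of analysis.
for level in ("single", "multiple"):
    subgroup = [s for s in studies if s.data_source == level]
    print(f"{level}-source studies: mean r = {weighted_mean_r(subgroup):.3f}")
```

A noticeable gap between the subgroup means would suggest the method variable moderates the antecedent–speed relationship, which is the logic behind the subgroup analyses described above; a full analysis would also correct for artifacts such as measurement unreliability, which this sketch omits.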