Since its 1947 founding, ETS has conducted and disseminated scientific research to support its products and services, and to advance the measurement and education fields. In keeping with these goals, ETS is committed to making its research freely available to the professional community and to the general public. Published accounts of ETS research, including papers in the ETS Research Report series, undergo a formal peer-review process by ETS staff to ensure that they meet established scientific and professional standards. All such ETS-conducted peer reviews are in addition to any reviews that outside organizations may provide as part of their own publication processes. Peer review notwithstanding, the positions expressed in the ETS Research Report series and other published accounts of ETS research are those of the authors and not necessarily those of the Officers and Trustees of Educational Testing Service.

The Daniel Eignor Editorship is named in honor of Dr. Daniel R. Eignor, who from 2001 until 2011 served the Research and Development division as Editor for the ETS Research Report series. The Eignor Editorship has been created to recognize the pivotal leadership role that Dr. Eignor played in the research publication process at ETS.
RESEARCH REPORT

ETS Research Report Series, ISSN 2330-8516
Exploring Methods for Developing Behaviorally Anchored Rating Scales for Evaluating Structured Interview Performance

Harrison J. Kell, Michelle P. Martin-Raugh, Lauren M. Carney, Patricia A. Inglese, Lei Chen, & Gary Feng

Educational Testing Service, Princeton, NJ

Behaviorally anchored rating scales (BARS) are an essential component of structured interviews. Use of BARS to evaluate interviewees' performance is associated with greater predictive validity and reliability and less bias. BARS are time-consuming and expensive to construct, however. This report explores the feasibility of gathering participants' responses to structured interview questions through an online crowdsourcing platform and using those responses to develop BARS. We describe the development of 12 structured interview questions to assess four applied social skills, the elicitation of responses to these questions in the form of critical incidents from 68 respondents, and the creation of BARS from these critical incidents. Results indicate that online participants are able to produce responses of sufficient quality to generate BARS for evaluating structured interview performance. We conclude by discussing limitations of this approach and future directions for research and practice.

Keywords: Amazon Mechanical Turk; behaviorally anchored rating scales; crowdsourcing; employment interviews; performance appraisal; social skills; structured interviews

doi:10.1002/ets2.12152

Employment interviews are one of the most popular means of selecting personnel (Levashina, Hartwell, Morgeson, & Campion, 2014; McDaniel, Whetzel, Schmidt, & Maurer, 1994). Structured interviews that present all interviewees with the same standardized questions have higher validities for pre...