The Postsecondary Instructional Practices Survey (PIPS) is a valid and reliable measure of the self-reported instructional practices of postsecondary instructors, including those outside science, technology, engineering, and mathematics. This paper describes the PIPS's development and validation, its scoring conventions and outputs, and its applications.
We validate the Measure of Acceptance of the Theory of Evolution (MATE) on undergraduate students using the Rasch model and use the MATE to explore qualitatively how students express their acceptance of evolution. At least 24 studies have used the MATE, most under the assumption that it is unidimensional. However, we found that the MATE is best used as two separate dimensions. Used this way, the MATE produces reliable (α > 0.85) measures of (i) acceptance of evolution facts and data and (ii) acceptance of the credibility of evolution and rejection of non-scientific ideas. Using k-means cluster analysis, we found that students express their acceptance of evolution in five distinct profiles: (i) uniform high acceptance; (ii) uniform moderate acceptance; (iii) neutral acceptance; (iv) acceptance of facts, but rejection of credibility; and (v) rejection of both facts and credibility. Furthermore, we found that knowledge of macroevolution moderately explains students' acceptance profiles, corroborating previous claims that teaching macroevolution may be one way to improve students' acceptance. We use these findings to propose a first set of operational definitions of evolution acceptance and suggest that educators continue to explore additional ways to operationalize it. (J Res Sci Teach 54:642–671, 2017)
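As an illustration of the profile-clustering step named in the abstract above, here is a minimal Python sketch, assuming hypothetical per-student scores on the two MATE dimensions; scikit-learn, the sample size, and all variable names are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of k-means profile clustering on two MATE subscale
# scores. All data are hypothetical; this is not the study's pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical mean scores (1-5 Likert) for 300 students on the two
# MATE subscales: facts/data and credibility.
scores = rng.uniform(1.0, 5.0, size=(300, 2))

# Five clusters, matching the five acceptance profiles reported above.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)

# Centroids map onto profile labels, e.g. a centroid near (5, 5) would
# correspond to "uniform high acceptance"; one near (5, 1), to
# "acceptance of facts, but rejection of credibility".
print(km.cluster_centers_)
print(np.bincount(km.labels_))  # number of students in each profile
```

With real subscale scores, interpreting each centroid against the scale midpoint is what would map clusters onto labels such as those in the five profiles above.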
Background: Current direct Likert-scale measures of evolution acceptance include the MATE, GAENE, and I-SEA. The pros and cons of each instrument have been debated, yet little research has teased out their similarities and differences when they are administered together in a single context, beyond the observation that their measures tend to be highly correlated. We administered all three instruments to 452 college students in non-majors biology classes at two research-intensive universities in the Midwestern and Western United States to investigate the measurement properties of their items when combined into a single corpus. Results: Factor analysis using exploratory and confirmatory methods, together with Rasch analyses, suggested that a two-dimensional factor structure best describes the corpus of items. Whether an item was positively or negatively worded was the key determinant of its factor assignment. Examination of the highest-loading items on the respective factors indicates that the first factor measures acceptance of the truth of evolution and the second measures rejection of non-credible ideas about evolution. The correlation between the two factors is 0.73, so they share 53% of their variance (r² = 0.73² ≈ 0.53). When the items were treated unidimensionally, eleven exhibited potential misfit with the Rasch model; this number dropped to nine when the two factors were considered separately. These items, and the implications for future combined use of the MATE, GAENE, and I-SEA, are discussed in detail. Conclusions: This study is the first analysis of the MATE, GAENE, and I-SEA as a single corpus of items, and it corroborates previous work showing that these instruments yield measures with highly similar quantitative interpretations. It also corroborates the effect of negative item wording on how college students interpret items. While this finding applies to students in undergraduate non-majors biology coursework, work with more advanced biology students has shown that this apparent wording effect tends to disappear as students progress and become more accepting of evolution. We conclude that, despite apparent epistemological differences among the MATE, GAENE, and I-SEA, they can be treated as a single set of items measuring one factor or two factors without significant loss of quantitative interpretability.
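For readers unfamiliar with the two-factor extraction described above, the following is a minimal sketch, assuming hypothetical Likert responses; scikit-learn's FactorAnalysis with varimax rotation stands in for the dedicated EFA/CFA and Rasch software a study like this would use, and all data and variable names are assumptions.

```python
# Minimal sketch of a two-factor exploratory analysis on a combined
# item pool. All responses are hypothetical; the real study used
# exploratory/confirmatory factor analysis and Rasch modeling.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_students, n_items = 452, 30  # 452 matches the reported sample size
# Hypothetical 1-5 Likert responses to a combined MATE/GAENE/I-SEA pool.
responses = rng.integers(1, 6, size=(n_students, n_items)).astype(float)

# Extract two factors with varimax rotation; in the study, items split
# by positive vs. negative wording rather than by parent instrument.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(responses)
loadings = fa.components_.T  # one row per item, one column per factor

# Inspect the highest-loading items on each factor.
for f in range(2):
    top = np.argsort(-np.abs(loadings[:, f]))[:5]
    print(f"factor {f}: highest-loading items {top.tolist()}")
```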
Background: Collecting data on instructional practices is an important step in planning and enacting meaningful initiatives to improve undergraduate science instruction. Self-report surveys are among the most common tools for collecting such data. This paper is an instrument- and item-level analysis of the available self-report instruments for surveying postsecondary instructional practices. We qualitatively analyzed the instruments to document their features and systematically sorted their items into distinct categories based on their content. The paper provides a detailed description and evaluation of the instruments, identifies gaps in the literature, and offers suggestions for instrument selection, use, and development based on these findings. Results: The 12 instruments we analyzed use a variety of measurement and development approaches. There are two primary instrument types: those intended for all postsecondary instructors and those intended for instructors in a specific STEM discipline. The instruments intended for all instructors often cover teaching alongside other aspects of faculty work. The number of teaching-practice items and the response scales varied widely. Most teaching-practice items referred to the format of in-class instruction (54%), such as group work or problem solving. Another substantial share of items referred to assessment practices (35%), frequently focusing on the specific types of summative assessment items used. Conclusions: The recent interest in describing teaching practices has led to the development of a diverse set of self-report instruments. Many lack an audit trail of their development, including the rationale for response scales; whole-instrument and construct reliability values; and face, construct, and content validity evidence. Future researchers should consider building on these existing instruments to address their current weaknesses. In addition, important aspects of instruction are not described by any available instrument, including laboratory-based instruction, hybrid and online instructional environments, and teaching with elements of universal design.