Traditionally underserved students (TUSs), including Black, Latinx, Native American, and low-socioeconomic-status (SES) students, have higher rates of departure from STEM undergraduate programs than their more privileged peers. These higher departure rates are associated with TUSs' lower performance in STEM gatekeeper courses compared to non-STEM courses through their sophomore year. Flipped models of instruction, when used in gatekeeper chemistry courses, are broadly shown to improve student course performance (higher course grades; reduced W/D/F rates). However, there is no clear evidence that flipped models specifically improve course performance for TUSs. This study's objective was to determine the impact of a flipped model on students' course performance in General Chemistry I on the basis of their race/ethnicity and SES. Using a nonparallel quasi-experimental design, student performance by race/ethnicity and SES in the flipped model course was compared to that of students in the traditional course. Results show TUSs were significantly more likely to have higher course grades in the flipped model course as compared to the traditional course. Further, the performance gap was closed between Black and Latinx students and their White/Asian peers in the flipped model. However, a performance gap between low-SES and middle- to high-SES students emerged in the flipped model. The W/D/F rate was decreased in the flipped model for all student groups. Therefore, although flipped models are not a panacea, they can be one critical support strategy used in freshman and sophomore chemistry gatekeeper courses to mitigate TUSs' departure from STEM undergraduate programs.
There has been a recent rapid increase in the number of primary studies comparing the impacts of flipped to traditional instruction in undergraduate chemistry courses. Across these studies, there are wide variations in flipped model design, implementation, and reported impacts. To investigate these variations, 28 primary peer-reviewed studies were systematically analyzed. There were three notable trends. First, compared to final exams, course GPA seems to be the more sensitive measure of significant gains in students' overall academic performance. Second, courses reporting significant gains in course GPA concertedly used (i) an extrinsic motivational tool for students to complete pre- and in-class activities, (ii) responsive mini-lecturing as an in-class instructional strategy, and (iii) the optional flipped model feature of independent postclass problem solving. In stark contrast, studies reporting no difference in course GPA rarely incentivized student completion of pre- and in-class activities, and none used responsive mini-lecturing or postclass problem solving. It was difficult to determine robust trends in impacts on various student populations, as impacts were seldom disaggregated by descriptors such as sex, race/ethnicity, and income level. Third, although there was a clear trend of constructivism being used as the theoretical framework for flipped courses, extrinsic motivation potentially plays a key role in the model's impact. Instructor ability or desire to motivate students to engage with learning, however, was not addressed in most studies. These trends imply that more research is needed to determine the impacts of flipped courses on diverse student populations and the role of instructor beliefs and ability to motivate students to engage with learning in a flipped course.
Such research should be used to advance the theoretical understanding of how, why, and in what contexts flipped courses positively and significantly impact diverse students’ academic performance.
The purpose of the Stakeholder Playbook is to enable system developers to take into account the different ways in which stakeholders need to "look inside" of the AI/XAI systems. Recent work on Explainable AI has mapped stakeholder categories onto explanation requirements. While most of these mappings seem reasonable, they have been largely speculative. We investigated these matters empirically. We conducted interviews with senior and mid-career professionals possessing post-graduate degrees who had experience with AI and/or autonomous systems, and who had served in a number of roles including former military, civilian scientists working for the government, scientists working in the private sector, and scientists working as independent consultants. The results show that stakeholders need access to others (e.g., trusted engineers, trusted vendors) to develop satisfying mental models of AI systems, and they need to know "how it fails" and "how it misleads" and not just "how it works." In addition, explanations need to support end-users in performing troubleshooting and maintenance activities, especially as operational situations and input data change. End-users need to be able to anticipate when the AI is approaching an edge case. Stakeholders often need to develop an understanding that enables them to explain the AI to someone else and not just satisfy their own sensemaking. We were surprised that only about half of our interviewees said they always needed better explanations. This and other findings that are apparently paradoxical can be resolved by acknowledging that different stakeholders have different capabilities, different sensemaking requirements, and different immediate goals. In fact, the concept of "stakeholder" is misleading because the people we interviewed served in a variety of roles simultaneously; we recommend referring to these roles rather than trying to pigeonhole people into unitary categories.
Different cognitive styles are another formative factor, as suggested by participant comments to the effect that they preferred to dive in and play with the system rather than being spoon-fed an explanation of how it works. These factors combine to determine what, for each given end-user, constitutes satisfactory and actionable understanding.
When people make plausibility judgments about an assertion, an event, or a piece of evidence, they are gauging whether it makes sense. Therefore, we can treat plausibility judgments as sensemaking activities. In this paper, we review the research literature, presenting the different ways that plausibility has been defined and measured. Then we describe the research program that allowed us to formulate our sensemaking perspective on plausibility. The model is based on an analysis of 23 cases, most of which involved understanding and interacting with information technology. The resulting model describes the user’s attempts to construct a narrative as a state transition string, relying on plausibility judgments.
Recent theories of expertise and expert performance emphasize effort over talent. Specifically, the amount of deliberate practice that performers accumulate has been strongly correlated with their level of expertise in domains including chess, music, and sports. Indeed, it is widely accepted that becoming an expert requires an average of 10,000 hours, or 10 years, of deliberate practice—that is, activities directed by an instructor or coach that are designed to improve specific aspects of performance in measurable ways that offer timely feedback and refinement of skills through repetition. While it is easy to envision deliberate practice by aspiring athletes and musicians, many domains of performance do not have established cultures of practice. In particular, consciously incorporating deliberate practice during college-based professional education and deliberate performance during the career work of professionals (who typically have little time to "practice") can accelerate the development of professionals to expert levels.