Objective: The research examined how humans attribute blame to humans, nonautonomous robots, autonomous robots, or environmental factors in scenarios in which errors occur. Background: When robots and humans serve on teams, human perception of their technological team members can be a critical component of successful cooperation, especially when task completion fails. Methods: Participants read a set of scenarios describing human–robot team task failures. Separate scenarios were written to emphasize the role of the human, the robot, or environmental factors in producing the task failure. After reading each scenario, participants allocated blame for the failure among the human, the robot, and environmental factors. Results: In general, participants assigned the most blame to humans, followed by robots and then environmental factors. If the scenario described the robot as nonautonomous, participants attributed almost as little blame to it as to the environmental factors; in contrast, if the scenario described the robot as autonomous, participants attributed almost as much blame to it as to the human. Conclusion: We suggest that humans use a hierarchy of blame in which robots are seen as partial social actors, with the degree to which people view them as social actors depending on their degree of autonomy. Application: The acceptance of robots by human co-workers will be a function of the attribution of blame when errors occur in the workplace. The present research suggests that greater robot autonomy will result in greater attribution of blame in work tasks.
Mental models are mental representations of the external world that humans constantly use when interacting with the environment and the systems within it. These models are constituted in part by an underlying structure of associated concepts that is modified as a person gains experience with a system or domain. Video games provide a context that encourages the development of sophisticated mental models. The current research sought to understand how mental model structures differ among video game players of varying experience levels. Participants were recruited through internet forums and Mechanical Turk. Mental model structures were measured using relatedness ratings between pairs of concepts derived from highly experienced League of Legends players. The relatedness ratings were transformed into Pathfinder networks, which were used to analyze mental model structures. Results revealed structural differences in mental models across experience levels. A three-stage model of mental model structure development is proposed to explain the results, which suggest that some structural characteristics appear earlier in mental model development than others. The role of mental model structural characteristics is discussed in light of the design of both training programs and video games.
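The Pathfinder transformation mentioned above is a standard network-pruning step. As a rough illustration only (not the authors' code), the sketch below implements the common PFNET(r = ∞, q = n − 1) variant: an edge between two concepts survives only if no alternative path has a smaller worst-link distance. The rating-to-distance conversion and the toy rating matrix are illustrative assumptions, not data from the study.

```python
import numpy as np

def pathfinder_network(dist):
    """Prune a symmetric distance matrix into a Pathfinder network (PFNET).

    Implements the common r = infinity, q = n - 1 parameterization: an
    edge (i, j) is kept only if no alternative path exists whose largest
    single-link distance is smaller than the direct distance.
    """
    n = dist.shape[0]
    minimax = dist.astype(float).copy()
    # Floyd-Warshall-style pass computing minimax (bottleneck) path costs.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via_k = max(minimax[i, k], minimax[k, j])
                if via_k < minimax[i, j]:
                    minimax[i, j] = via_k
    # An edge survives only when the direct link is itself a minimax path.
    return (dist <= minimax) & ~np.eye(n, dtype=bool)

# Hypothetical relatedness ratings on a 1-9 scale (higher = more related),
# converted to distances as 10 - rating before pruning.
ratings = np.array([[9, 8, 2],
                    [8, 9, 7],
                    [2, 7, 9]])
dist = 10.0 - ratings
np.fill_diagonal(dist, 0.0)
print(pathfinder_network(dist))
```

In this toy case the weak direct link between concepts 0 and 2 is pruned because the indirect path through concept 1 has a smaller worst link, leaving a sparse network whose retained links can then be compared across experience levels.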
Currently, many alert systems designed for electronic health records (EHRs) can negatively affect providers' ability to prescribe appropriately and thus can compromise patient safety. This stems from the design of the alerts and the overwhelming number of alerts presented. The human factors literature contains a wealth of information on the design and use of alerts, but that information is either not widely known or has not been implemented in the design of EHRs. This paper brings that literature to light and demonstrates how to implement its recommendations to develop not only a better system for handling alerts in EHRs but better Computerized Provider Order Entry (CPOE) as well. A review of the Meaningful Use 2 (MU2) usability reports linked to the ONC CHPL website revealed serious usability issues with medication-based alerts displayed in EHRs. This paper describes the types of alert system issues found in the MU2 usability reports across various EHR vendors, along with human factors research that specifies how to address these issues. In addition, recommendations for the design of medication-based alerts are presented.
This report presents data from summative usability tests conducted by User-View on behalf of multiple vendors as part of MU2 certification. The objective is to present findings related to performance metrics and use errors associated with each prioritized certification criterion, shed light on EHR effectiveness, and contribute to ongoing discussions of EHR usability.