In a series of eight studies it is shown that the first peak in the horizontal autocorrelation of the image of a word (which captures the similarity in shape between the neighbouring strokes of letters) determines (i) the appearance of the words as striped; (ii) the speed with which the words are read, both aloud and silently; and (iii) the speed with which a paragraph of text can be searched. By subtly distorting the horizontal dimension of text, and thereby reducing the first peak in the horizontal autocorrelation, it is shown that the speed of word recognition can be increased. The increase in speed is greater in poor readers.
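The central measure above, the first peak in the horizontal autocorrelation of a word image, can be sketched in a few lines. This is an illustrative reconstruction only: the function name, the synthetic stripe profile, and the simple peak-finding rule are assumptions, since the abstract does not specify the exact image-processing pipeline.

```python
import numpy as np

def first_autocorr_peak(signal):
    """Return (lag, height) of the first local maximum (after lag 0)
    in the normalized autocorrelation of a 1-D signal.

    Sketch only; the studies' actual pipeline operates on word images
    and is not detailed in the abstract.
    """
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    ac = ac / ac[0]                                    # lag-0 correlation = 1
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag, ac[lag]
    return None, None

# Hypothetical horizontal luminance profile: dark strokes 3 px wide
# every 10 px, standing in for the roughly periodic strokes of letters.
profile = np.tile(np.r_[np.ones(3), np.zeros(7)], 20)
lag, height = first_autocorr_peak(profile)
# The first peak falls at the stroke period (lag 10 here); a taller
# peak corresponds to more "striped" text in the paper's sense.
```

Subtly stretching or compressing the horizontal dimension, as the studies describe, perturbs this periodicity and so lowers the height of the first peak.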
Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with increasing amounts of data. There are challenges in adoption, however, as the outputs of such systems may be difficult to trust for a variety of reasons. We conducted a naturalistic study using the Critical Incident Technique (CIT) to identify which factors were present in incidents where trust in an AI technology used in intelligence work (i.e., the collection, processing, analysis, and dissemination of intelligence) was gained or lost. We found that explainability and performance of the AI were the most prominent factors in responses; however, several other factors affected the development of trust. Further, most incidents involved two or more trust factors, demonstrating that trust is a multifaceted phenomenon. We also conducted a broader thematic analysis to identify other trends in the data. We found that trust in AI is often affected by the interaction of other people with the AI (i.e., people who develop it or use its outputs), and that involving end users in the development of the AI also affects trust. We provide an overview of key findings, practical implications for design, and possible future areas for research.
Mental models describe an individual's or group's internal representation of knowledge, which can be used to interpret their interactions with the environment and to provide insight into decision-making strategies and predictions of performance. There are several ways to elicit and analyze mental models; however, there is little guidance for selecting an appropriate elicitation method. Depending on the constraints of the research and the desired outcomes, some elicitation methods are more appropriate than others. Three criteria were identified as useful for selecting an elicitation method: the level of interaction with participants, the number of participants being evaluated, and the level of analytical detail required. A process for selecting the most appropriate mental model elicitation method is presented, along with an overview of the factors that affect method selection and of the different types of mental models.
Artificial Intelligence (AI) is becoming ubiquitous in national security work (intelligence, defense, etc.); however, introducing AI into work systems is fraught with challenges. Trust is gained and lost through experiences, and many factors affect trust in AI. Similarly, users adapt their workflows based on their trust in these systems. We used a naturalistic approach to understand how intelligence professionals adapted their work practices after gaining or losing trust in AI. We found a variety of adaptations, characterized as either task-based or frequency-based: users added or removed tasks from their workflow, or changed how frequently they used the AI in their workflow, respectively. We provide specific examples and quotes from participants along with our findings, and discuss potential methodological implications for studying and designing AI-driven work systems.
The System Usability Scale (SUS) is a popular method for measuring the subjective usability of a system, due largely to the simplicity and rapidity of both collecting and analyzing data. A drawback is that the SUS generates a single unidimensional usability score from 0 to 100. Several researchers have amassed larger datasets across multiple projects to support additional methods of gleaning insights from the SUS survey. Along these lines, we investigate the practical value of extending the SUS survey with additional items, such as open-text responses, and test underlying assumptions about how SUS results are interpreted. We found that while a lower SUS score generally correlates with a stronger desire to modify the system, people generally want to make modifications to a system regardless of its usability. Further, we found that the amount of user feedback related to system modifications predicted subjective usability ratings.
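The single 0–100 score mentioned above comes from the standard SUS scoring rule: each of the ten items is answered on a 1–5 scale, odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is multiplied by 2.5. A minimal sketch (the function name is our own):

```python
def sus_score(responses):
    """Compute the standard 0-100 SUS score from ten Likert responses (1-5).

    Odd-numbered items (1-indexed) contribute (response - 1);
    even-numbered items contribute (5 - response); the sum is
    scaled by 2.5 to span 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: strong agreement with every positive item and strong
# disagreement with every negative item yields the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Because the ten item scores collapse into this one number, the score alone cannot say *what* to change, which is the gap the open-text extensions investigated above aim to fill.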