Abstract. Given the numerous constraints of onscreen keyboards, such as smaller keys and lack of tactile feedback, remembering and typing long, complex passwords, already a burdensome task on desktop computing systems, becomes nearly unbearable on small mobile touchscreens. Complex passwords require numerous screen depth changes and are problematic both motorically and cognitively. Here we present baseline data on device- and age-dependent differences in human performance with complex passwords. This starting dataset provides a valuable warning that simply porting password requirements from one platform to another (i.e., desktop to mobile) without considering device constraints may be unwise.
The artificial intelligence (AI) revolution is upon us, with the promise of advances such as driverless cars, smart buildings, automated health diagnostics, and improved security monitoring. In fact, many people already have AI in their lives as "personal" assistants that allow them to search the internet, make phone calls, and create reminder lists through voice commands. Whether consumers know that those systems are AI is unclear; however, reliance on those systems implies that they are deemed trustworthy to some degree. Many current efforts aim to assess AI system trustworthiness through measurements of Accuracy, Reliability, and Explainability, among other system characteristics. While these characteristics are necessary, determining that an AI system is trustworthy because it meets its system requirements will not ensure widespread adoption of AI. It is the user, the human affected by the AI, who ultimately places their trust in the system. Trust in automated systems has long been a topic of psychological study; however, artificial intelligence systems pose unique challenges for user trust. AI systems operate using patterns in massive amounts of data. No longer are we asking automation to do human tasks; we are asking it to do tasks that we cannot. Moreover, AI has been built to dynamically update its set of beliefs (i.e., to "learn"), a process that is not easily understood even by its designers. Because of this complexity and unpredictability, the AI user has to trust the AI, changing the dynamic between user and system into a relationship. Alongside research toward building trustworthy systems, understanding user trust in AI will be necessary in order to achieve the benefits and minimize the risks of this new technology.