With the emergence of conversational artificial intelligence (AI) agents, it is important to understand the mechanisms that influence users' experiences of these agents. In this paper, we study one of the most common tools in the designer's toolkit: conceptual metaphors. Metaphors can present an agent as akin to a wry teenager, a toddler, or an experienced butler. How might the choice of metaphor influence our experience of an AI agent? Sampling a set of metaphors along the dimensions of warmth and competence (defined by psychological theories as the primary axes of variation in human social perception), we perform a study (N = 260) in which we manipulate the metaphor, but not the behavior, of a Wizard-of-Oz conversational agent. Following the experience, participants are surveyed about their intention to use the agent, their desire to cooperate with the agent, and the agent's usability. Contrary to the current tendency of designers to describe AI products with high competence metaphors, we find that metaphors signaling low competence lead to better evaluations of the agent than metaphors signaling high competence. This effect persists even though the high and low competence agents feature identical, human-level performance and the wizards are blind to condition. A second study confirms that intention to adopt decreases rapidly as the competence projected by the metaphor increases. In a third study, we assess the effects of metaphor choice on potential users' desire to try out the system and find that users are drawn to systems that project higher competence and warmth. These results suggest that projecting competence may help attract new users, but those users may discard the agent unless it quickly corrects course with a lower competence metaphor. We close with a retrospective analysis that finds similar patterns between metaphors and user attitudes toward past conversational agents such as Xiaoice, Replika, Woebot, Mitsuku, and Tay.
Despite their pervasiveness, current text-based conversational agents (chatbots) are predominantly monolingual, while their users are often multilingual. It is well known that multilingual users mix languages while interacting with others, as well as in their interactions with computer systems (such as query formulation in text- and voice-based search interfaces and digital assistants). Linguists refer to this phenomenon as code-mixing or code-switching. Do multilingual users also prefer chatbots that can respond in a code-mixed language over those that cannot? To inform the design of chatbots for multilingual users, we conduct a mixed-method user study (N = 91) examining how bilingual users evaluate and perceive conversational agents that code-mix and reciprocate the users' mixing choices over multiple conversation turns. We design a human-in-the-loop chatbot with two different code-mixing policies: (a) always code-mix, irrespective of user behavior, and (b) nudge with subtle code-mixed cues and reciprocate only if the user, in turn, code-mixes. These two are contrasted with a monolingual chatbot that never code-mixes. Users are asked to interact with the bots and provide ratings of perceived naturalness and personal preference; they are also asked open-ended questions about what they (dis)liked about the bots. Analysis of the chat logs, users' ratings, and qualitative responses reveals that multilingual users strongly prefer chatbots that can code-mix. We find that self-reported language proficiency is the strongest predictor of user preferences. Compared to the Always code-mix policy, Nudging emerges as a low-risk, low-gain policy that is equally acceptable to all users. Nudging is further supported by the observation that users who rate the code-mixing bot higher typically tend to reciprocate the bot's language mixing pattern. These findings are a first step towards developing conversational systems that are more human-like and engaging by virtue of adapting to users' linguistic style.
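To make the two policies concrete, the following minimal sketch frames them as a turn-level decision rule. This is a hypothetical illustration, not the study's actual system; the function name, policy labels, and return values are our own assumptions.

    def choose_response_style(policy, user_code_mixed, nudged_already):
        """Decide whether the bot's next turn should be code-mixed.

        policy           -- "always", "nudge", or "monolingual" (illustrative labels)
        user_code_mixed  -- True if the user's last turn mixed languages
        nudged_already   -- True if a subtle code-mixed cue was already sent
        """
        if policy == "always":
            return "code-mixed"      # mix irrespective of user behavior
        if policy == "nudge":
            if user_code_mixed:
                return "code-mixed"  # reciprocate the user's mixing choice
            if not nudged_already:
                return "subtle-cue"  # drop a light code-mixed cue once
            return "monolingual"     # the user did not take the nudge
        return "monolingual"         # baseline bot never code-mixes

Under this framing, the nudge policy only commits to code-mixing after the user signals comfort with it, which is consistent with its description above as a low-risk option.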
When members of ad-hoc virtual teams need to collectively ideate or deliberate, they often fail to engage with each other's perspectives in a constructive manner. At best, this leads to sub-optimal outcomes; at worst, it can cause conflicts that leave teams unwilling to continue working together. Prior work has attempted to facilitate constructive communication by highlighting problematic communication patterns and nudging teams to alter their interaction norms. However, these approaches achieve limited success because they fail to acknowledge two social barriers: (1) it is hard to reset team norms mid-interaction, and (2) corrective nudges have limited utility unless team members believe it is safe to voice their opinion and that their opinion will be heard. This paper introduces Empathosphere, a chat-embedded intervention to mitigate these barriers and foster constructive communication in teams. To mitigate the first barrier, Empathosphere leverages the known benefits of "experimental spaces" in dampening existing norms and creating a climate conducive to change, instantiating this "space" as a separate communication channel in a team's workspace. To mitigate the second barrier, Empathosphere harnesses the benefits of perspective-taking to cultivate a group climate that promotes a norm of members speaking up and engaging with each other, orchestrating authentic socio-emotional exchanges designed to induce perspective-taking. A controlled study (N = 110) compared Empathosphere to an alternate intervention strategy of prompting teams to reflect on their team experience. We found that Empathosphere led to higher work satisfaction, encouraged more open communication and feedback within teams, and boosted teams' desire to continue working together. This work demonstrates that "experimental spaces," particularly those that integrate methods of encouraging perspective-taking, can be a powerful means of improving communication in virtual teams.
To support the massive data requirements of modern supervised machine learning (ML) algorithms, crowdsourcing systems match volunteer contributors to appropriate tasks, learning what types of tasks contributors are interested in completing. In this paper, instead of focusing on what to ask, we focus on learning how to ask: how to make relevant and interesting requests that encourage crowdsourcing participation. We introduce a technique that augments questions with request strategies drawn from social psychology, together with a contextual bandit algorithm that selects which strategy to apply for a given task and contributor. We deploy our approach to collect volunteer data from Instagram for visual question answering (VQA), an important task in computer vision and natural language processing that has enabled numerous human-computer interaction applications. For example, when encountering a user’s Instagram post that contains the ornate Trevi Fountain in Rome, our approach learns to augment its original raw question “Where is this place?” with image-relevant compliments such as “What a great statue!” or with travel-relevant justifications such as “I would like to visit this place”, increasing the user’s likelihood of answering the question and thus providing a label. Deployed on Instagram to ask questions about social media images, our agent improves the response rate from 15.8% with unaugmented questions to 30.54% with baseline rule-based strategies and to 58.1% with ML-based strategies.
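A contextual bandit of the kind described above can be sketched as a disjoint LinUCB selector: one linear payoff model per request strategy, with the reward being whether the contributor answered. This is a minimal sketch under our own assumptions (class name, feature encoding, and exploration weight are illustrative), not the deployed system.

    import numpy as np

    class LinUCBStrategySelector:
        """Disjoint LinUCB over request strategies (illustrative sketch)."""

        def __init__(self, n_strategies, dim, alpha=1.0):
            self.alpha = alpha  # exploration weight
            # One ridge-regression model per strategy (arm).
            self.A = [np.eye(dim) for _ in range(n_strategies)]
            self.b = [np.zeros(dim) for _ in range(n_strategies)]

        def select(self, x):
            """x: context features for the (post, contributor) pair."""
            scores = []
            for A, b in zip(self.A, self.b):
                A_inv = np.linalg.inv(A)
                theta = A_inv @ b  # estimated payoff weights for this arm
                scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
            return int(np.argmax(scores))  # arm with the highest upper bound

        def update(self, arm, x, reward):
            """reward: 1 if the contributor answered the question, else 0."""
            self.A[arm] += np.outer(x, x)
            self.b[arm] += reward * x

Each arm would correspond to one strategy (e.g., image-relevant compliment, travel-relevant justification, or no augmentation), so the bandit gradually learns which style of request works for which kind of post and contributor while still exploring the alternatives.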
Initiating conversations with new people at work is often intimidating because of uncertainty about their interests. People worry that others may reject their attempts to initiate conversation or may not enjoy the conversation. We introduce a new system, Nooks, built on Slack, that reduces fear of social evaluation by enabling individuals to initiate any conversation as a nook: a conversation room that identifies its topic but not its creator. By automatically convening others interested in the nook, Nooks further reduces fears of social evaluation by guaranteeing individuals in advance that the people they are about to interact with are interested in the conversation. In a multi-month deployment with participants in a summer research program, Nooks provided participants with non-threatening and inclusive interaction opportunities, as well as ambient awareness, leading to new interactions online and offline. Our results demonstrate how intentionally designed social spaces can reduce fears of social evaluation and catalyze new workplace connections.