We evaluate user experience (UX) when users play and control music with three smart speakers: Amazon's Alexa on an Echo, Google Assistant on a Google Home, and Apple's Siri on a HomePod. To measure UX we use four established UX and usability questionnaires (AttrakDiff, SASSI, SUISQ-R, SUS). We investigated the sensitivity of these questionnaires in two ways: first, we compared the UX reported for each of the speakers; second, we compared the UX of completing easy single tasks and more difficult multi-tasks with these speakers. We find that the investigated questionnaires are sufficiently sensitive to show significant differences in UX between these easy and difficult tasks. In addition, we find some significant UX differences between the tested speakers. Specifically, all tested questionnaires except the SUS show a significant difference in UX between Siri and Alexa, with Siri being perceived as more user-friendly for controlling music. We discuss the implications of our work for researchers and practitioners.

Speech assistance is a growing market, with 25% yearly growth predicted over the next three years [21]. Speech assistants can be integrated into different devices, such as smartphones, personal computers and smart speakers, i.e., dedicated speakers that can be controlled by voice commands. In our work we focus on smart speakers. Currently, one in five Americans over 18 owns a smart speaker [28], which is a remarkable number considering that smart speakers were first introduced in 2014 [16]. This means that within six years approximately 53 million Americans bought a smart speaker, a market development comparable to the rapid spread of smartphones [7].
This market trend is not confined to North America but is present throughout the world, in Europe as well as in Asia, Africa and Latin America [32,33,8,17], showing that smart speakers are of broad public interest.

The consumer speech-assistance market in the English-speaking world, as well as in Europe, is dominated by three manufacturers and their assistants: Amazon with Alexa, Google with Google Assistant and Apple with Siri [8,36]. These three assistants cover more than 88% of the US market [36]. Intuitively, these three assistants are named as the most commonly known Voice User Interfaces (VUIs) [31] and featured as smart speakers in numerous product
We evaluate the user experience (UX) of Amazon's Alexa when users play and control music. To measure UX we use established UX questionnaires (SASSI, SUISQ-R, SUS, AttrakDiff). We investigated face validity by asking users to rate how well they think a questionnaire measures what it is supposed to measure, and we assessed construct validity by correlating the UX scores of the questionnaires with each other. We find a mismatch between the face and construct validity of the evaluated questionnaires. Specifically, users feel that the SASSI represents their experience better than the other questionnaires; however, this is not supported by the correlations between questionnaires, which suggest that all investigated questionnaires measure UX to a similar extent. Importantly, the fact that face validity and construct validity diverge is not surprising, as this has been observed before. Our work adds to the existing literature by providing face and construct validity scores of UX questionnaires for interactions with the common speech assistant Alexa.
Speech assistants exhibit a high error rate, with about one in three user requests resulting in an error. Nonetheless, speech assistants are being adopted rapidly, with about 1.8 billion users expected in 2021. Given the relatively high task failure rate of speech assistants, this may be surprising, and it raises the question of how much user experience (UX) is affected by task success on these devices. We measure UX with four metrics and evaluate task success in interactions with the speech assistants Alexa, Google Assistant, and Siri. We find that task success only explains between 13% and 30% of the variance in UX. This suggests that the majority of UX is not explained by whether an assistant successfully completes tasks. Moreover, we find that the three assistants do not differ significantly in task success rate but do differ in UX, which supports the conclusion that task success and UX are only loosely aligned. We discuss our results and point out limitations and potential future work.