Summary
Courtship in Drosophila melanogaster offers a powerful experimental paradigm for the study of innate sexually dimorphic behaviors [1, 2]. Fruit fly males exhibit an elaborate courtship display toward a potential mate [1, 2]. Females never actively court males, but their response to the male's display determines whether mating will actually occur. Sex-specific behaviors are hardwired into the nervous system via the actions of the sex determination genes doublesex (dsx) and fruitless (fru) [1]. Activation of male-specific dsx/fru+ P1 neurons in the brain initiates the male's courtship display [3, 4], suggesting that neurons unique to males trigger this sex-specific behavior. In females, dsx+ neurons play a pivotal role in sexual receptivity and post-mating behaviors [1, 2, 5, 6, 7, 8, 9]. Yet it is still unclear how dsx+ neurons and dimorphisms in these circuits give rise to the different behaviors displayed by males and females. Here, we manipulated the function of dsx+ neurons in the female brain to investigate higher-order neurons that drive female behaviors. Surprisingly, we found that activation of female dsx+ neurons in the brain induces females to behave like males by promoting male-typical courtship behaviors. Activated females display courtship toward conspecific males or females, as well as toward other Drosophila species. We uncovered specific dsx+ neurons critical for driving male courtship and identified pheromones that trigger such behaviors in activated females. While male courtship behavior was thought to arise from male-specific central neurons, our study shows that the female brain is equipped with latent courtship circuitry capable of inducing this male-specific behavioral program.
We evaluate user experience (UX) when users play and control music with three smart speakers: Amazon's Alexa on an Echo, Google Assistant on a Google Home, and Apple's Siri on a HomePod. To measure UX we use four established UX and usability questionnaires (AttrakDiff, SASSI, SUISQ-R, SUS). We investigated the sensitivity of these questionnaires in two ways: first, we compared the UX reported for each of the speakers; second, we compared the UX of completing easy single tasks and more difficult multi-tasks with these speakers. We find that the investigated questionnaires are sufficiently sensitive to show significant differences in UX between these easy and difficult tasks. In addition, we find some significant UX differences between the tested speakers. Specifically, all tested questionnaires except the SUS show a significant difference in UX between Siri and Alexa, with Siri being perceived as more user-friendly for controlling music. We discuss the implications of our work for researchers and practitioners.

Speech assistance is a growing market, with 25% yearly growth predicted over the next three years [21]. Speech assistants can be integrated into different devices, such as smartphones, personal computers, and smart speakers, which are dedicated speakers that can be controlled by voice commands. In our work we focus on smart speakers. Currently, one in five Americans over 18 owns a smart speaker [28], which is a remarkable number considering that smart speakers were first introduced in 2014 [16]. It means that within six years approximately 53 million Americans bought a smart speaker, a market development comparable to the rapid spread of smartphones [7]. This market trend is not confined to North America but is present throughout the world, in Europe as well as Asia, Africa, and Latin America [32, 33, 8, 17], showing that smart speakers are of broad public interest.

The consumer speech assistance market in the English-speaking world, as well as in Europe, is dominated by three manufacturers and assistants: Amazon with Alexa, Google with Google Assistant, and Apple with Siri [8, 36]. These three assistants cover more than 88% of the market in the US [36]. Unsurprisingly, these three assistants are named as the most commonly known Voice User Interfaces (VUIs) [31] and featured as smart speakers in numerous product
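The abstract does not include analysis code, but a minimal sketch of the kind of sensitivity check it describes might look as follows: per-participant overall questionnaire scores for two speakers, compared with a paired test because the design is within-subjects. SUS is used as a stand-in instrument, and all numbers are simulated placeholders, not the study's data.

```python
# Hedged sketch (not the authors' code): checking whether a questionnaire
# is sensitive enough to separate two speakers, assuming each participant
# rated both devices. All scores below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical overall SUS scores (0-100) from 20 participants, within-subjects.
sus_siri = rng.normal(78, 8, size=20)
sus_alexa = rng.normal(72, 8, size=20)

# Paired t-test, since the same participants rated both speakers.
t, p = stats.ttest_rel(sus_siri, sus_alexa)
print(f"t = {t:.2f}, p = {p:.3f}")
```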
Error messages are frequent in interactions with Conversational User Interfaces (CUIs). Smart speakers respond to roughly every third user request with an error message. Errors can heavily affect user experience (UX) in interactions with CUIs. However, there is limited research on how error responses should be formulated. In this paper, we present a system to study how people classify error messages into different categories (acknowledgement of user sentiment, acknowledgement of error, and apology) and to evaluate people's preferences among error responses with clear categories. The results indicate that if an error response contains only one element (i.e., a neutral acknowledgement of the error, an apology, or sentiment), participants prefer responses that acknowledge the error neutrally. Moreover, we find that when interviewed, participants like error messages to include an apology, an explanation of what went wrong, and a suggestion for how to fix the problem, in addition to a neutral acknowledgement of the error. Our study makes two main contributions: (1) our results inform the design of error messages, and (2) we present a framework for error response categorization and validation.
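The paper's actual framework is not reproduced here; the sketch below is merely one illustrative way to encode the three response categories named in the abstract and compose messages from them. All type names and wording are hypothetical.

```python
# Hedged sketch, not the paper's framework: encoding the three error-response
# categories named in the abstract and composing a response from a subset.
from enum import Flag, auto

class ErrorElement(Flag):
    NEUTRAL_ACK = auto()  # neutral acknowledgement of the error
    APOLOGY = auto()      # explicit apology
    SENTIMENT = auto()    # acknowledgement of the user's sentiment

# Illustrative wording only.
TEMPLATES = {
    ErrorElement.APOLOGY: "Sorry,",
    ErrorElement.SENTIMENT: "I understand this is frustrating.",
    ErrorElement.NEUTRAL_ACK: "I didn't understand that request.",
}

def compose(elements: ErrorElement) -> str:
    """Join the templates for every element present in the flag set."""
    return " ".join(text for elem, text in TEMPLATES.items() if elem in elements)

# Single-element response participants preferred: neutral acknowledgement.
print(compose(ErrorElement.NEUTRAL_ACK))
# A richer combination: apology plus neutral acknowledgement.
print(compose(ErrorElement.APOLOGY | ErrorElement.NEUTRAL_ACK))
```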
We evaluate the user experience (UX) of Amazon's Alexa when users play and control music. To measure UX we use established UX questionnaires (SASSI, SUISQ-R, SUS, AttrakDiff). We investigated face validity by asking users to rate how well they think each questionnaire measures what it is supposed to measure, and we assessed construct validity by correlating the questionnaires' UX scores with each other. We find a mismatch between the face and construct validity of the evaluated questionnaires. Specifically, users feel that SASSI represents their experience better than the other questionnaires do; however, this is not supported by the correlations between questionnaires, which suggest that all investigated questionnaires measure UX to a similar extent. This divergence between face and construct validity is not surprising, as it has been observed before. Our work adds to the existing literature by providing face and construct validity scores of UX questionnaires for interactions with the common speech assistant Alexa.
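For readers unfamiliar with the correlational approach to construct validity, a minimal sketch follows: if two questionnaires measure the same construct, their overall scores should correlate strongly across participants. The data here are simulated placeholders, not the study's results, and the score scales are only assumed.

```python
# Hedged sketch (not the authors' analysis): assessing construct validity
# by correlating overall scores from two questionnaires across participants.
# A high correlation suggests the instruments capture a similar construct.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30
latent_ux = rng.normal(0, 1, n)                        # shared UX construct
sassi = 4.0 + 0.8 * latent_ux + rng.normal(0, 0.3, n)  # placeholder SASSI scores
sus = 70 + 12 * latent_ux + rng.normal(0, 5, n)        # placeholder SUS scores

r, p = stats.pearsonr(sassi, sus)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```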
Males of numerous animal species use mating songs to attract females and intimidate competitors. We demonstrate that modulations in song amplitude are behaviourally relevant in the fruit fly Drosophila. We show that Drosophila melanogaster females prefer amplitude modulations typical of melanogaster song over other modulations, which suggests that amplitude modulations are processed auditorily by D. melanogaster. Our work demonstrates that receivers can decode messages in amplitude modulations, complementing the recent finding that male flies actively control song amplitude. To describe amplitude modulations, we propose the concept of song amplitude structure (SAS) and discuss its similarities to, and differences from, amplitude modulation with distance (AMD).