With the future availability of highly automated vehicles (AVs), vulnerable road users (VRUs) will encounter vehicles without human operators. To compensate for the lack of eye contact, communication via external human-machine interfaces (eHMIs) is planned. Whether such interfaces are adequate for people with intellectual disabilities (IDs), however, is still unknown. This work compares eHMI concepts by their perceived user experience (UX) for people with and without ID to evaluate the inclusiveness of current eHMI concepts. We analyzed related work and derived two representative concepts, one visual and one auditory eHMI. Subsequently, we surveyed N=120 participants (64 with, 56 without ID), who watched videos of simulations and compared the perceived UX of the selected eHMI concepts in visual, auditory, and combined modalities against a baseline without eHMI. The concepts were assessed using a modified User Experience Questionnaire - Short (UEQ-S). We found that auditory eHMIs performed worse than visual or multi-modal ones, and that multi-modal concepts performed worse for people with ID in terms of pragmatic quality and crossing decisions. Our insights can help both industry and academia make AVs more inclusive.