COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. The consequences, while globally severe, are acutely pronounced in low- and middle-income countries (LMICs), which confront persistent gaps in resources and clinician accessibility. Social robots are well recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interaction; natural, 'human-like' conversation is required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework that fuses verbal (contextual speech) and nonverbal (facial expression) social cues, aimed at improving engagement in human-robot interaction and ultimately facilitating mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid-face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI coupled to the robot's facial representation of emotions, such that the robot adapts its emotional response to users' speech in real time. Experiments with healthy participants demonstrate emotion recognition accuracy exceeding 90% for happy, tired, sad, angry, surprised, and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (over 70% accuracy overall) but are easily distinguishable from the other emotions. A qualitative user experience analysis indicates an overall enthusiastic and engaged reception to multimodal human-robot interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple, low-cost social robot has been deployed in pilot tests to support older individuals and PwD at the Schizophrenia Research Foundation (SCARF) in Chennai, India. A deployment procedure addressing challenges in cultural acceptance, end-user acclimatization, and resource allocation is further introduced. Results indicate that the hybrid-face robotic system holds strong promise for stimulating psychosocial human-robot interaction. Future work targets telemedicine deployment to mitigate the mental health impact of COVID-19 on older adults and PwD in both LMICs and higher-income regions.
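The abstract does not specify the form of the affect space mapping, but a common approach is to place prototype expressions around a valence-arousal circumplex and blend them by angular proximity. The sketch below illustrates that idea; all prototype names, anchor angles, and weighting choices are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np

# Hypothetical sketch of a valence-arousal affect space mapping: prototype
# facial expressions are anchored at fixed angles on the affect circumplex,
# and an arbitrary (valence, arousal) state is rendered as a blend of the
# nearest prototypes. Angles and names below are assumed, not from the paper.
PROTOTYPES = {
    "happy":     np.deg2rad(10),    # high valence, mild arousal
    "surprised": np.deg2rad(80),    # high arousal
    "angry":     np.deg2rad(160),   # low valence, high arousal
    "sad":       np.deg2rad(215),   # low valence, low arousal
    "tired":     np.deg2rad(270),   # low arousal
}

def blend_weights(valence: float, arousal: float) -> dict:
    """Blend weights for each prototype from angular proximity on the circumplex."""
    angle = np.arctan2(arousal, valence)
    intensity = min(1.0, np.hypot(valence, arousal))   # distance from neutral
    raw = {}
    for name, proto_angle in PROTOTYPES.items():
        # Angular distance wrapped into [0, pi]; prototypes within 90 degrees
        # of the current affect state contribute, closer ones weigh more.
        d = np.abs(np.angle(np.exp(1j * (angle - proto_angle))))
        raw[name] = max(0.0, 1.0 - d / (np.pi / 2))
    total = sum(raw.values()) or 1.0
    return {name: intensity * w / total for name, w in raw.items()}

if __name__ == "__main__":
    # A mildly positive, calm state blends mostly "happy" with some "tired".
    print(blend_weights(valence=0.6, arousal=-0.2))
```

In a pipeline like the one described, the cloud AI's sentiment estimate of the user's speech would supply the (valence, arousal) input, and the blend weights would drive the digital face rendering in real time.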
Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation, and patient benefit remain immature. Affective human-robot interaction remains an open problem, and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art over the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scan of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation.
Electroencephalogram (EEG) signals undergo complex temporal and spectral changes during voluntary movement intention. Characterization of such changes has focused mostly on narrowband spectral processes such as Event-Related Desynchronization (ERD) in the sensorimotor rhythms, because EEG is mostly considered to emerge from oscillations of neuronal populations. However, changes in the temporal dynamics, especially in the broadband arrhythmic EEG, have not been investigated for movement intention detection. Long-Range Temporal Correlations (LRTC) are ubiquitously present in several neuronal processes and typically require longer timescales to detect. In this paper, we study the ongoing changes in the dynamics of long- as well as short-range temporal dependencies in single-trial broadband EEG during movement intention. We obtained LRTC in 2 s windows of broadband EEG and modeled it using the Autoregressive Fractionally Integrated Moving Average (ARFIMA) model, which allowed simultaneous modeling of short- and long-range temporal correlations. There were significant (p < 0.05) changes in both broadband long- and short-range temporal correlations during movement intention and execution. We discovered that broadband LRTC and narrowband ERD are complementary processes providing distinct information about movement: eliminating LRTC from the signal did not affect the ERD, and conversely, eliminating ERD from the signal did not affect the LRTC. Exploring possible applications in Brain-Computer Interfaces (BCI), we used hybrid features combining LRTC, ARFIMA, and ERD to detect movement intention. A significantly higher (p < 0.05) classification accuracy of 88.3 ± 4.2% was obtained using ARFIMA and ERD features together, which also enabled the earliest detection of movement, 1 s before its onset. The ongoing changes in long- and short-range temporal correlations in broadband EEG effectively capture motor command generation and can be used to detect movement successfully. These temporal dependencies provide different and additional information about movement.
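LRTC in neural signals are commonly quantified by the scaling exponent of Detrended Fluctuation Analysis (DFA). The abstract does not state the exact estimator used alongside ARFIMA, so the following is a minimal sketch of DFA on a single 2 s window, assuming a 250 Hz sampling rate and illustrative box sizes; it is not the authors' pipeline.

```python
import numpy as np

# Illustrative sketch: estimating long-range temporal correlations in one
# 2 s broadband EEG window via Detrended Fluctuation Analysis (DFA).
# The scaling exponent alpha serves as the LRTC feature (alpha ~ 0.5 for
# uncorrelated noise, alpha > 0.5 for persistent long-range correlations).

def dfa_exponent(x: np.ndarray, scales: np.ndarray) -> float:
    """DFA scaling exponent of a 1-D signal over the given box sizes."""
    profile = np.cumsum(x - np.mean(x))          # integrated (cumulative) signal
    flucts = []
    for n in scales:
        n_boxes = len(profile) // n
        segs = profile[: n_boxes * n].reshape(n_boxes, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            # Least-squares linear detrend within each box, then RMS residual.
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # alpha is the slope of log F(n) versus log n.
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

if __name__ == "__main__":
    fs = 250                                      # assumed sampling rate (Hz)
    window = np.random.randn(2 * fs)              # stand-in for a 2 s EEG window
    scales = np.unique(
        np.floor(np.logspace(np.log10(8), np.log10(125), 10)).astype(int)
    )
    print(f"DFA alpha = {dfa_exponent(window, scales):.2f}")  # ~0.5 for white noise
```

In a movement-detection setting of the kind described, such an exponent computed per sliding window would be concatenated with ARFIMA coefficients and ERD band-power features to form the hybrid feature vector fed to a classifier.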