The study of human nonverbal social behaviors has taken a more quantitative and computational turn in recent years, driven by the development of smart interfaces and of virtual agents and robots capable of social interaction. One of the most interesting nonverbal social behaviors, producing a characteristic vocal signal, is laughter. Laughter is produced in several different situations: in response to external physical, cognitive, or emotional stimuli; to negotiate social interactions; and also, pathologically, as a consequence of neural damage. For this reason, laughter has attracted researchers from many disciplines. A consequence of this multidisciplinarity is the absence of a holistic vision of this complex behavior: the methods used to analyze and classify laughter, as well as the terminology, are heterogeneous, and the findings are sometimes contradictory and poorly documented. This survey collects and presents objective measurement methods and results from studies across different fields, in order to contribute to building a unified model and taxonomy of laughter. Such a model could support advances in several fields, from artificial intelligence and human-robot interaction to medicine and psychiatry.
Automatic and objective algorithms for detecting gait events from MEMS inertial measurement unit (IMU) data have been developed to overcome the subjectivity and inaccuracy of traditional visual observation. Their accuracy and sensitivity have been verified with healthy older adults, patients with Parkinson's disease, and patients with spinal injuries, using single-task gait exercises, in which events are well defined because the subject focuses only on walking. Multi-task walking, in which subjects perform secondary cognitive tasks while walking, simulates a more realistic and challenging scenario and is therefore a better benchmark. In this paper, we test two algorithms, based on shank and foot angular velocity data respectively, in single-task, dual-task, and multi-task walking. Results show that both algorithms fail when the subject slows down markedly or pauses under high cognitive and attentional load; in particular, the first-stride detection error rate of the foot-based algorithm increases. Stride time is accurate with both algorithms regardless of walking type, but the shank-based algorithm overestimates the proportion of the swing phase within a gait cycle. Increasing the number of cognitive tasks also induces this error with both algorithms.
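As a rough illustration of how shank-based detection of this kind typically works, the sketch below finds mid-swing peaks in the sagittal-plane shank angular velocity, takes the surrounding minima as toe-off and heel-strike, and then derives stride time and swing proportion. The signal names, thresholds, and window lengths are assumptions for illustration, not the parameters of the algorithms evaluated in the paper.

```python
# Minimal sketch of shank-based gait event detection from sagittal-plane
# angular velocity. Illustrative only: `gyro` (rad/s), `fs` (Hz), and all
# thresholds/windows are assumptions, not the paper's parameters.
import numpy as np
from scipy.signal import find_peaks

def detect_gait_events(gyro, fs):
    # Mid-swing appears as a prominent positive peak in the shank
    # angular velocity; enforce a minimum inter-stride interval.
    mid_swing, _ = find_peaks(gyro, height=1.0, distance=int(0.5 * fs))

    events = []
    for p in mid_swing:
        lo = max(0, p - int(0.4 * fs))
        hi = min(len(gyro), p + int(0.4 * fs))
        toe_off = lo + int(np.argmin(gyro[lo:p]))     # minimum before mid-swing
        heel_strike = p + int(np.argmin(gyro[p:hi]))  # minimum after mid-swing
        events.append((toe_off, p, heel_strike))

    # Stride time (toe-off to next toe-off) and swing proportion per cycle.
    strides = []
    for (to, _, hs), (to_next, _, _) in zip(events, events[1:]):
        stride_t = (to_next - to) / fs
        strides.append((stride_t, (hs - to) / fs / stride_t))
    return events, strides
```

Note that when a subject pauses mid-stride, peak-to-peak segmentation of this kind has no valid mid-swing peak to latch onto, which is consistent with the failure mode reported above under high cognitive load.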
Rapidly localizing injured survivors is a major challenge for rescue teams, since delays can be fatal. In this paper, a sensor system for human rescue comprising three different types of sensors, a CO2 sensor, a thermal camera, and a microphone, is proposed. The system's ability to detect living victims under rubble has been tested in a high-fidelity simulated disaster area. Results show that the CO2 sensor effectively narrows down the search area, while the thermal camera can confirm the exact position of the victim. Moreover, microphones used in combination with the other sensors are expected to be of great benefit for the detection of casualties. In this work, an algorithm to recognize voices or suspected human noise under rubble has also been developed and tested.
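The abstract does not detail the audio algorithm, but a common baseline for this kind of detection combines frame-level energy with spectral concentration in the speech band. The sketch below is a minimal, hypothetical version of such a detector; the band limits and thresholds are placeholders, not the paper's values.

```python
# Illustrative frame-based detector for voice or suspected human noise.
# Assumes a mono signal `x` sampled at `fs` Hz; all thresholds and band
# limits are placeholders, not the values used in the paper.
import numpy as np

def detect_human_sound(x, fs, frame_s=0.032, energy_factor=3.0):
    n = int(frame_s * fs)
    frames = [x[i:i + n] for i in range(0, len(x) - n, n)]
    energies = np.array([np.sum(f ** 2) for f in frames])
    noise_floor = np.median(energies)  # rough stationary-noise estimate

    flags = []
    for f, e in zip(frames, energies):
        spec = np.abs(np.fft.rfft(f * np.hanning(n))) ** 2
        freqs = np.fft.rfftfreq(n, 1 / fs)
        in_band = (freqs >= 100) & (freqs <= 1000)  # rough speech band
        band_ratio = spec[in_band].sum() / (spec.sum() + 1e-12)
        # Flag frames that are both louder than the noise floor and
        # spectrally concentrated in the speech band.
        flags.append(e > energy_factor * noise_floor and band_ratio > 0.6)
    return flags
```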
Talented musicians can deliver a powerful emotional experience to the audience by skillfully modifying several musical parameters, such as dynamics, articulation, and tempo. Musical robots are expected to control those musical parameters in the same way, to give the audience an experience comparable to listening to a professional human musician. However, practical control of those parameters depends on the type of musical instrument being played. In this study, we describe our newly developed music dynamics control system for the Waseda Anthropomorphic Saxophonist robot. We first built a physical model of the saxophone reed motion and verified the dynamics-related parameters of the overall robot-saxophone system. We found that the magnitude of the air flow is related to the sound pressure level, as expected, but also that the lower lip is critical to sound stability. Accordingly, we then implemented a music dynamics control system for the robot and succeeded in enabling the robot to perform a music piece at different sound pressure levels.

Index Terms-Entertainment robotics, human-centered robotics, humanoid robots.

I. INTRODUCTION

Music is a social activity that can powerfully influence large groups of people. A skillful musician can elicit powerful emotions in the audience by careful modulation of several different musical parameters, such as dynamics, tempo, articulation, and pitch [1]. In the emerging field of entertainment robotics, musical robots are attracting attention for their potential to deliver multi-user interactive experiences [2]. With their musical performance abilities, these robots are expected to entertain and interact with a large crowd.
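As an illustration of the kind of physical reed model the abstract refers to, a common starting point is a single-degree-of-freedom mass-spring-damper driven by the pressure difference across the reed. The sketch below integrates such a model with semi-implicit Euler; all parameter values are rough illustrative guesses, not the verified parameters of the robot-saxophone system.

```python
# Generic single-degree-of-freedom reed model: m*x'' + r*x' + k*x = -S*dp,
# integrated with semi-implicit (symplectic) Euler for numerical stability.
# All parameters are illustrative guesses, not the paper's verified values.
import numpy as np

def simulate_reed(p_mouth, fs=44100, dur=0.05,
                  m=5e-5, r=0.1, k=6e3, S=1e-4):
    """Reed tip displacement x(t) in meters under constant mouth pressure
    p_mouth (Pa); bore pressure is neglected in this sketch."""
    n = int(dur * fs)
    dt = 1.0 / fs
    x = np.zeros(n)
    v = 0.0
    for i in range(1, n):
        dp = p_mouth                      # pressure difference across the reed
        a = (-S * dp - r * v - k * x[i - 1]) / m
        v += a * dt                       # update velocity first (semi-implicit)
        x[i] = x[i - 1] + v * dt
    return x

# Example: a higher blowing pressure drives a larger reed deflection,
# in line with the airflow/sound-pressure-level relation noted above.
x_soft = simulate_reed(p_mouth=1000.0)    # ~1 kPa
x_loud = simulate_reed(p_mouth=4000.0)    # ~4 kPa
```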