Sound source localization on a mobile robot can be a difficult task due to a variety of problems inherent to a real environment, including robot ego-noise, echoes, and the transient nature of ambient noise. As a result, source localization data are often very noisy and unreliable. In this work, we overcome some of these problems by combining the localization evidence over a variety of robot poses using an evidence grid. The result is a representation that localizes the pertinent objects well over time, can be used to filter poor localization results, and may also be useful for global re-localization from sound localization results.
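As a rough illustration of the evidence-grid idea described above, the sketch below accumulates log-odds evidence for a sound source from bearing-only measurements taken at different robot poses. The grid size, evidence weights, and the update function are assumptions for demonstration, not the paper's actual implementation.

import numpy as np

GRID_SHAPE = (100, 100)    # cells
CELL = 0.1                 # meters per cell
L_HIT, L_MISS = 0.9, -0.1  # log-odds evidence weights (illustrative)

def update(grid, pose, bearing, beam_width=np.radians(15.0), max_range=5.0):
    """Raise log-odds for cells inside the measured bearing cone from
    this pose; lower them slightly elsewhere within sensing range."""
    px, py, ptheta = pose                              # robot x, y, heading (rad)
    xs = (np.arange(grid.shape[1]) + 0.5) * CELL
    ys = (np.arange(grid.shape[0]) + 0.5) * CELL
    gx, gy = np.meshgrid(xs, ys)
    rng = np.hypot(gx - px, gy - py)
    ang = np.arctan2(gy - py, gx - px) - (ptheta + bearing)
    ang = (ang + np.pi) % (2.0 * np.pi) - np.pi        # wrap to [-pi, pi]
    in_range = rng < max_range
    in_cone = in_range & (np.abs(ang) < beam_width / 2.0)
    grid[in_cone] += L_HIT
    grid[in_range & ~in_cone] += L_MISS
    return grid

grid = np.zeros(GRID_SHAPE)
# Two poses hearing the same source at (3.0, 2.0) reinforce one region.
update(grid, pose=(1.0, 1.0, 0.0), bearing=np.arctan2(1.0, 2.0))
update(grid, pose=(5.0, 1.0, np.pi), bearing=-np.arctan2(1.0, 2.0))
print(np.unravel_index(np.argmax(grid), grid.shape))   # a cell in the overlap near the source

Cells whose accumulated log-odds stay high across many poses mark persistent sources, while individual bearings that disagree with those peaks can be discarded as poor localization results.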
We propose a vibrotactile interface in the form of a belt for guiding blind walkers. This interface enables blind walkers to receive haptic directional instructions along complex paths without impairing their ability to listen to and perceive the environment, as some auditory directional instructions do. The belt interface was evaluated in a controlled study with 10 blind individuals and compared to audio guidance. The experiments were videotaped, and the participants' behaviors and comments were content-analyzed. Completion times and deviations from ideal paths were also collected and statistically analyzed. By triangulating the quantitative and qualitative data, we found that the belt resulted in closer path following at the expense of speed. In general, the participants were positive about the use of a vibrotactile belt to provide directional guidance.
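As a purely hypothetical illustration of how such a belt might convey directional instructions, the snippet below maps a desired heading correction to one of eight evenly spaced vibration motors; the motor count and layout are assumptions, not details from the study.

# Hypothetical mapping from a heading correction to one of eight
# vibration motors spaced evenly around a belt (45 degrees apart).
# Motor 0 sits at the wearer's front; indices increase clockwise.

def motor_for_direction(heading_error_deg: float, n_motors: int = 8) -> int:
    """Return the index of the belt motor closest to the direction
    the walker should turn toward."""
    spacing = 360.0 / n_motors
    return round((heading_error_deg % 360.0) / spacing) % n_motors

assert motor_for_direction(0) == 0     # on course: pulse the front motor
assert motor_for_direction(90) == 2    # turn right: pulse the right motor
assert motor_for_direction(-90) == 6   # turn left: pulse the left motor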
In this work, we describe an autonomous mobile robotic system for finding, investigating, and modeling ambient noise sources in the environment. The system has been fully implemented in two different environments, using two different robotic platforms and a variety of sound source types. Making use of a two-step approach to autonomous exploration of the auditory scene, the robot first quickly moves through the environment to find and roughly localize unknown sound sources using the auditory evidence grid algorithm. Then, using the knowledge gained from the initial exploration, the robot investigates each source in more depth, improving upon the initial localization accuracy, identifying volume and directivity, and, finally, building a classification vector useful for detecting the sound source in the future.
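The second, in-depth investigation step could look something like the following sketch, which refines a source position and estimates its volume from level samples taken while orbiting the rough location found in the first pass. The free-field inverse-square propagation model and the function names are assumptions for illustration, not the system's actual code.

import numpy as np

def refine_source(robot_xys, levels_db):
    """Weighted-centroid position refinement plus a volume estimate
    referenced to 1 m. levels_db are measured levels at each pose."""
    xys = np.asarray(robot_xys, dtype=float)
    w = 10 ** (np.asarray(levels_db, dtype=float) / 10.0)  # dB -> power
    xy = np.average(xys, axis=0, weights=w)        # louder samples pull harder
    d = np.maximum(np.linalg.norm(xys - xy, axis=1), 0.1)
    # Back-project each sample to 1 m; the loudest is the volume estimate.
    # (Comparing back-projected levels by bearing would give directivity.)
    vol_at_1m = float(np.max(levels_db + 20 * np.log10(d)))
    return xy, vol_at_1m

# Simulated check: a 70 dB (at 1 m) source at (2, 3), sampled on a 1 m orbit.
src = np.array([2.0, 3.0])
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
poses = [src + np.array([np.cos(a), np.sin(a)]) for a in angles]
levels = [70.0 - 20 * np.log10(np.linalg.norm(p - src)) for p in poses]
print(refine_source(poses, levels))   # ~(2, 3) and ~70 dB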
Effective communication with a mobile robot using speech is a difficult problem even when the auditory scene can be controlled. Robot ego-noise, echoes, and human interference are all common sources of decreased intelligibility. In real-world environments, however, these common problems are compounded by many different types of background noise sources. For instance, military scenarios might be punctuated by high-decibel plane noise and bursts from weaponry that mask parts of the speech output from the robot. Even in non-military settings, fans, computers, alarms, and transportation noise can cause enough interference to render a traditional speech interface unintelligible. In this work, we seek to overcome these problems by applying the robotic advantages of sensing and mobility to a text-to-speech interface. Using perspective-taking skills to predict how the human user is being affected by new sound sources, a robot can adjust its speaking patterns and/or reposition itself within the environment to limit the negative impact on intelligibility, making a speech interface easier to use.
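A minimal sketch of this perspective-taking step might look like the following: estimate the signal-to-noise ratio at the listener's position under an assumed free-field inverse-square model, then decide whether to speak, speak louder, or reposition. All names and thresholds here are illustrative, not the paper's implementation.

import numpy as np

def level_at(src_xy, src_db_1m, listener_xy):
    """Sound level at the listener, assuming inverse-square falloff."""
    d = max(np.linalg.norm(np.asarray(src_xy, dtype=float) - listener_xy), 0.1)
    return src_db_1m - 20 * np.log10(d)

def plan_speech(robot_xy, listener_xy, noise_sources, speech_db_1m=65.0,
                min_snr_db=6.0, max_db_1m=75.0):
    """Raise volume if that recovers intelligibility at the listener;
    otherwise recommend repositioning closer to the listener."""
    listener_xy = np.asarray(listener_xy, dtype=float)
    noise = max(level_at(p, db, listener_xy) for p, db in noise_sources)
    speech = level_at(robot_xy, speech_db_1m, listener_xy)
    if speech - noise >= min_snr_db:
        return ("speak", speech_db_1m)
    needed = speech_db_1m + (noise + min_snr_db - speech)
    if needed <= max_db_1m:
        return ("speak_louder", needed)
    return ("reposition", None)

# A loud fan near the listener forces the robot to move rather than shout.
print(plan_speech((0, 0), (4, 0), [((4.5, 0), 80.0)]))   # ('reposition', None)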
We present an approach that uses Q-learning