Audio-based receiver localization in indoor environments has multiple applications, including indoor navigation, location tagging, and tracking. Public places such as shopping malls and consumer stores often have loudspeakers installed to play music for entertainment; similarly, office spaces may have sound-conditioning speakers installed to soften other environmental noises. We discuss an approach that leverages this infrastructure to localize requesting devices in such environments by playing barely audible controlled sounds from multiple speakers at known positions. Our approach can localize devices such as smartphones, tablets, and laptops to sub-meter accuracy, and the user does not need to carry any specialized hardware. Unlike acoustic approaches that use high-energy ultrasound waves, the use of barely audible (low-energy) signals poses very different challenges. We discuss these challenges, how we addressed them, and experimental results on two prototype implementations: a request-play-record localizer and a continuous tracker. We evaluated our approach in a real-world meeting room and report promising initial results, with localization accuracy within half a meter 94% of the time. The system has been deployed in multiple zones of our office building and is now part of a location service in constant operation in our lab.
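The abstract does not detail the signal-processing pipeline. A minimal illustrative sketch, assuming a cross-correlation time-of-arrival scheme (a standard building block for acoustic ranging; all names and parameters here are hypothetical, not from the paper):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def estimate_delay(reference, recording, fs):
    """Estimate the arrival delay (seconds) of `reference` inside
    `recording` via cross-correlation."""
    corr = np.correlate(recording, reference, mode="valid")
    lag = int(np.argmax(np.abs(corr)))
    return lag / fs

def delay_to_distance(delay_s):
    """Convert a time-of-flight delay to a speaker-receiver distance."""
    return delay_s * SPEED_OF_SOUND

# Toy example: a short chirp embedded 0.25 s into a noisy recording,
# at low amplitude to mimic a "barely audible" probe signal.
fs = 16000
t = np.linspace(0, 0.05, int(0.05 * fs), endpoint=False)
chirp = np.sin(2 * np.pi * (500 + 4000 * t) * t)

rng = np.random.default_rng(0)
recording = 0.01 * rng.standard_normal(fs)       # 1 s of background noise
start = int(0.25 * fs)
recording[start:start + len(chirp)] += 0.05 * chirp

delay = estimate_delay(chirp, recording, fs)
distance = delay_to_distance(delay)
```

With delays estimated from several speakers at known positions, the receiver position could then be recovered by multilateration; that step is omitted here.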
Detection of changes in images is a widely studied problem in a variety of disciplines, such as remote sensing, surveillance, medicine, and civil infrastructure. Fundamentally, two images captured at different times differ not only in the subject but also in the conditions under which they were captured: illumination, atmospheric absorption, sensor characteristics, noise, and so on. A change-detection algorithm must be tolerant enough to classify these variations as no-change while still tracking changes in the subject itself. The subject may appear, disappear, move, change its shape, or change its brightness or colour. In this work we model the spectral signals received from the imaged surfaces at the two times as related by a linear function, which amplifies or attenuates the dynamic range of the image. We estimate this shift in dynamic range using the Random Sample Consensus (RANSAC) algorithm and classify pixels not satisfying the estimated shift as changes.
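The abstract names the technique (a linear intensity model fitted robustly with RANSAC, with outlier pixels flagged as changes). A minimal sketch under that reading; the function names, thresholds, and iteration count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ransac_linear_fit(x, y, n_iters=200, inlier_tol=10.0, rng=None):
    """Robustly estimate y ~ a*x + b by repeatedly sampling two pixel
    pairs and keeping the model with the largest inlier set (RANSAC)."""
    rng = rng or np.random.default_rng(0)
    best_ab, best_inliers = (1.0, 0.0), -1
    for _ in range(n_iters):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[i] - y[j]) / (x[i] - x[j])
        b = y[i] - a * x[i]
        n_in = int(np.sum(np.abs(y - (a * x + b)) < inlier_tol))
        if n_in > best_inliers:
            best_ab, best_inliers = (a, b), n_in
    return best_ab

def change_mask(img_t1, img_t2, inlier_tol=10.0):
    """Pixels that do not follow the global linear intensity shift
    between the two acquisitions are flagged as changes."""
    x = img_t1.ravel().astype(float)
    y = img_t2.ravel().astype(float)
    a, b = ransac_linear_fit(x, y, inlier_tol=inlier_tol)
    residual = np.abs(y - (a * x + b))
    return (residual >= inlier_tol).reshape(img_t1.shape)

# Toy example: img2 is a brightened/rescaled img1, plus one changed patch.
rng = np.random.default_rng(1)
img1 = rng.integers(0, 200, size=(20, 20)).astype(float)
img2 = 1.2 * img1 + 5.0
img2[5:10, 5:10] += 100.0   # a genuine change
mask = change_mask(img1, img2)
```

Because the unchanged pixels lie exactly on the line `y = 1.2x + 5`, RANSAC recovers that model and only the altered patch exceeds the residual tolerance.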
We present an approach to connecting multiple remote environments for natural interaction among people and objects. The focus of current communication and telepresence systems severely restricts user affordances in terms of movement, interaction, peripheral vision, spatio-semantic integrity, and even information flow: these systems allow information transfer rather than experiential interaction. We propose Environment-to-Environment (E2E) as a new communication paradigm that lets users interact in a natural manner using text, audio, and video by connecting environments. Each Environment is instrumented with as many different types of sensors as required to detect the presence and activity of objects; this position and activity information is used both to direct multimedia information to other Environments and to present incoming multimedia information on the right displays and speakers. The mediation for appropriate data capture and presentation is done by a scalable, event-based multimodal information system. This paper describes the design principles for E2E communication, discusses the system architecture, and relates our experience implementing prototypes of such systems in telemedicine and office-collaboration applications. We also discuss the research challenges and a roadmap for creating more sophisticated E2E applications in the near future.
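The event-based mediation described above can be pictured as a publish/subscribe system: sensors in one Environment publish presence and activity events, and presentation components in another Environment subscribe to them. The sketch below is purely illustrative; topic names and event fields are invented, not the paper's API:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe mediator: environment sensors publish
    events by topic; displays and speakers subscribe to the topics
    they should react to."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
routed = []
# A display in Environment B reacts to presence detected in Environment A.
bus.subscribe("envA/presence", lambda e: routed.append(("envB/display", e)))
bus.publish("envA/presence", {"object": "person-1", "position": (2.0, 3.5)})
```

A real E2E deployment would add many sensor and presentation endpoints per Environment and route media streams, not just metadata, but the mediation pattern is the same.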