with the device secured to them, eliminating the portability constraint of traditional US units. Depending on the parameters chosen (refer to Table 2), the unit may provide up to 5 to 6 consecutive hours of treatment without needing to recharge the unit battery [3]. Early studies suggest that this portable unit can deliver lower-intensity acoustic energy over a prolonged period (hours), as well as medium-intensity treatments over a shorter period (minutes), achieving the same temperature increase and pain relief as traditional US units, in a more versatile and patient-friendly manner.
Abstract: This paper presents the architecture of a Reflective Middleware for Acoustic Management that improves the interaction between users and agents in Intelligent Environments, using the Multiagent System paradigm. The middleware manages acoustic services, from the identification and recognition of acoustic signals through to their interpretation, processing and analysis. This paper details the foundations of the architecture, the middleware components, and case studies where the middleware can be used. Acoustic signals and vibrations form the connection between agents, but the middleware also handles the sounds produced by the users of the Intelligent Environment, making it an interactive and immersive experience.
Purpose: The Reflective Middleware for Acoustic Management (ReM-AM), based on the Middleware for Cloud Learning Environments (AmICL), aims to improve the interaction between users and agents in a Smart Environment (SE) using acoustic services, in order to handle the unpredictable situations that arise from sounds and vibrations. The middleware allows observing, analyzing, modifying and interacting with every state of an SE through acoustics.

Design/methodology/approach: This work details an extension of ReM-AM using the ontology-driven architecture (ODA) paradigm for acoustic management. The paper defines the different domains of knowledge required for managing sound in SEs, which are modeled using ontologies.

Findings: This work proposes an acoustics and sound ontology, a service-oriented architecture (SOA) ontology, and a data analytics and autonomic computing ontology, which work together. Finally, the paper presents three case studies in the contexts of the smart workplace (SWP), ambient-assisted living (AAL) and smart cities (SC).

Research limitations/implications: Future work will develop algorithms for the classification and analysis of sound events, to support emotion recognition not only from speech but also from random, isolated sound events. Further work will define the implementation requirements and the real-context modeling requirements needed to develop a working prototype.

Practical implications: The case studies demonstrate the flexibility of the ODA-based ReM-AM middleware: it is aware of different contexts, acquires information from each, and uses that information to adapt itself to the environment and improve it through autonomic cycles.
To achieve this, the middleware integrates the classes and relations of its ontologies naturally into the autonomic cycles.

Originality/value: The main contribution of this work is the description of the ontologies required for future work on acoustic management in SEs; previous studies have used ontologies for sound event recognition, but have not extended them as a knowledge source in an SE middleware. Specifically, this paper presents the theoretical framework of this work, composed of the AmICL middleware, the ReM-AM middleware and the ODA paradigm.
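The abstract above describes autonomic cycles that consume ontology classes and relations to adapt the environment. As a minimal sketch only (not the authors' implementation; all class, label and action names here are illustrative), such a cycle is commonly structured as a MAPE-K loop: Monitor, Analyze, Plan and Execute over a shared Knowledge base:

```python
# Hypothetical sketch of an autonomic (MAPE-K) cycle over acoustic events.
# The knowledge base stands in for the ontologies the abstract describes;
# every name below is an assumption for illustration.

class AutonomicCycle:
    def __init__(self, knowledge):
        self.knowledge = knowledge          # shared knowledge base (the "K")

    def monitor(self, event):
        # record the raw acoustic event in the knowledge base
        self.knowledge.setdefault("events", []).append(event)
        return event

    def analyze(self, event):
        # classify the event against known sound categories
        categories = self.knowledge.get("categories", {})
        return categories.get(event["label"], "unknown")

    def plan(self, category):
        # choose an adaptation action for the recognized category
        actions = {"alarm": "notify_occupants", "speech": "route_to_agent"}
        return actions.get(category, "ignore")

    def execute(self, action):
        return f"executed:{action}"

    def run(self, event):
        return self.execute(self.plan(self.analyze(self.monitor(event))))

kb = {"categories": {"siren": "alarm", "voice": "speech"}}
cycle = AutonomicCycle(kb)
print(cycle.run({"label": "siren"}))   # -> executed:notify_occupants
print(cycle.run({"label": "door"}))    # -> executed:ignore
```

In a real ODA-based system, the `categories` mapping would be derived from the acoustics and sound ontology rather than hard-coded.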
Occupancy and activity estimation are fields that have been extensively researched in recent years. However, existing techniques rely on a mixture of atmospheric features such as humidity and temperature, on multiple devices such as cameras and audio sensors, or are limited to speech recognition. This work proposes that occupancy and activity can be estimated from audio information alone, using an automatic audio feature engineering approach to extract, analyze and select descriptors/variables. This scheme of audio descriptor extraction is used to determine occupancy and activity in specific smart environments, so that the approach can differentiate between academic, administrative and commercial environments. The audio feature engineering approach is compared with previous work on occupancy and/or activity estimation in smart buildings (most of which includes other features, such as atmospheric and visual data). In general, the results obtained are very encouraging compared with previous studies.
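The abstract above describes extracting audio descriptors as the sole input for occupancy and activity estimation. As a hedged sketch of what such descriptors might look like (the paper does not specify its feature set; the descriptors and signals below are illustrative assumptions), one can compute simple time- and frequency-domain features from a raw waveform:

```python
# Illustrative audio descriptor extraction (not the paper's actual pipeline).
# Computes a few common descriptors: RMS energy, zero-crossing rate,
# and spectral centroid.
import numpy as np

def extract_descriptors(signal, sr=16000):
    """Return a small, hypothetical descriptor set for one audio frame."""
    rms = np.sqrt(np.mean(signal ** 2))                       # loudness proxy
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2       # noisiness proxy
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return {"rms": rms, "zcr": zcr, "centroid": centroid}

# Toy comparison: a quiet room vs. a busier, tonal soundscape
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
quiet = 0.01 * np.random.default_rng(0).standard_normal(sr)
busy = 0.3 * np.sin(2 * np.pi * 440 * t) \
     + 0.1 * np.random.default_rng(1).standard_normal(sr)

print(extract_descriptors(quiet))
print(extract_descriptors(busy))
```

Descriptors like these would then feed a selection step and a classifier to separate, e.g., academic from commercial environments, as the abstract outlines.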