This work presents an Indoor Positioning System (IPS) to estimate the location of people navigating complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments consisting of several connected rooms where people move freely. The system is non-intrusive: no personal information about the subjects is needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smartphones. This article details the methods developed to combine the above-mentioned technologies and the experiments performed to test the system. The results show a significant improvement in accuracy and performance with respect to previous WiFi-based solutions, as well as an extension of the range of operation.
This work presents a new Indoor Positioning System (IPS) for estimating the location of people, based on the combination of a WiFi Positioning System (WPS) and depth maps. Combining both technologies improves the efficiency of existing methods based solely on wireless positioning techniques. While other positioning systems force users to wear special devices, the system proposed in this paper only requires smartphones, besides the installation of RGB-D sensors in the sensing area. Furthermore, the system is non-intrusive, as it does not need to know people's identity. The paper describes the method developed to combine and exploit both types of sensory information for positioning purposes: the received signal strength measured from the different access points (APs) of the wireless network and the depth maps provided by the RGB-D cameras. The obtained results show a significant improvement in positioning with respect to common WiFi-based systems.
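The abstracts above do not publish the fusion algorithm itself, so the following is only a minimal sketch of how WiFi and depth information could be combined: a k-nearest-neighbour estimate over RSSI fingerprints gives a coarse position, which is then snapped to the closest person detection from a depth camera. All function names, the gating distance, and the toy data are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: k-NN WiFi fingerprinting fused with the nearest
# depth-camera person detection. Names, parameters, and data are hypothetical.
import numpy as np

def wifi_knn_estimate(rssi, fingerprints, positions, k=3):
    """Estimate a 2-D position from an RSSI vector via k nearest fingerprints."""
    d = np.linalg.norm(fingerprints - rssi, axis=1)   # distance in signal space
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)                     # inverse-distance weights
    return np.average(positions[nearest], axis=0, weights=w)

def fuse_with_depth(wifi_pos, depth_detections, gate=2.0):
    """Snap the coarse WiFi estimate to the closest depth-camera detection,
    if one lies within `gate` metres; otherwise keep the WiFi estimate."""
    if len(depth_detections) == 0:
        return wifi_pos
    det = np.asarray(depth_detections)
    d = np.linalg.norm(det - wifi_pos, axis=1)
    i = int(np.argmin(d))
    return det[i] if d[i] <= gate else wifi_pos

# Toy usage with fabricated fingerprints and detections (illustration only)
fingerprints = np.array([[-40., -70., -60.], [-55., -50., -65.], [-70., -45., -55.]])
positions    = np.array([[1.0, 2.0], [4.0, 2.5], [7.0, 3.0]])
rssi_now     = np.array([-52., -53., -63.])
detections   = [[4.2, 2.4], [6.8, 3.1]]

coarse = wifi_knn_estimate(rssi_now, fingerprints, positions)
print("fused position:", fuse_with_depth(coarse, detections))
```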
When there is an interaction between a robot and a person, gaze control is very important for face-to-face communication. However, when a robot interacts with several people, neurorobotics plays an important role in determining which person to look at and which to pay attention to among the others. Several factors can influence this decision: who is speaking, whom they are speaking to, where people are looking, whether a user wants to attract attention, etc. This article presents a novel method to decide whom to pay attention to when a robot interacts with several people. The proposed method is based on a competitive network that receives different stimuli (gaze, speech, pose, hoarding the conversation, habituation, etc.) which compete with each other to determine whom to pay attention to. The dynamic nature of this neural network allows a smooth transition of the focus of attention in response to significant changes in stimuli. A conversation is created between different participants, replicating human behaviour in the robot. The method deals with the problem of several interlocutors appearing in and disappearing from the visual field of the robot. A robotic head has been designed and built, and a virtual agent projected on the robot's face display has been integrated with the gaze control. Several experiments have been carried out with this robotic head integrated into a ROS architecture. The work presents an analysis of the method, the integration of the system with the robotic head, and the experiments and results obtained.
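The abstract does not specify the competitive network's equations, so the sketch below only illustrates the general idea under assumed dynamics: leaky integration of per-person stimuli with lateral inhibition and habituation, where the most active unit wins the focus of attention. The update rule, parameters, and stimulus values are hypothetical.

```python
# Minimal sketch of a competitive (winner-take-all) attention dynamic, assuming
# leaky integration, lateral inhibition, and habituation. Illustrative only.
import numpy as np

def attention_step(activity, habituation, stimuli, dt=0.1,
                   leak=0.5, inhibition=0.8, hab_rate=0.05):
    """One update of per-person attention activity given stimulus strengths
    (e.g. speaking, gaze, pose). Returns updated activity and habituation."""
    inhib = inhibition * (activity.sum() - activity)   # inhibition from the others
    drive = stimuli * (1.0 - habituation)              # habituated stimulus drive
    activity = np.clip(activity + dt * (-leak * activity + drive - inhib), 0.0, None)
    habituation = np.clip(habituation + dt * hab_rate * activity, 0.0, 1.0)
    return activity, habituation

# Toy run: person 1 is speaking and looking at the robot, the others are quiet.
activity, habituation = np.zeros(3), np.zeros(3)
stimuli = np.array([0.2, 1.0, 0.4])
for _ in range(50):
    activity, habituation = attention_step(activity, habituation, stimuli)
print("focus of attention -> person", int(np.argmax(activity)))
```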
In recent years, Artificial Intelligence Technologies (AIT) have been developed to improve the quality of life of the elderly and their safety at home. This work focuses on developing a system capable of recognising, in real time, the most common activities in the daily life of an elderly person, so that a specialist can monitor the person's habits, such as taking medication or eating the correct meals of the day. To this end, a prediction model based on recurrent neural networks, specifically bidirectional LSTM networks, has been developed to determine in real time the activity being carried out by individuals in their homes, using the information provided by a set of sensors installed in each person's home. The prediction model developed in this paper achieves a 95.42% accuracy rate, improving on the results of similar models currently in use. To obtain a reliable model with a high accuracy rate, a series of processing and filtering steps, such as a sliding-window method and a stacking and re-ordering algorithm, have been applied to the data obtained from the public CASAS database, which are subsequently used to train the neural network.
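As a rough illustration of the kind of pipeline this abstract describes, the sketch below segments a sensor-event stream into sliding windows and classifies each window with a bidirectional LSTM. The window length, layer sizes, and random stand-in data are assumptions, not the paper's published configuration or the CASAS preprocessing.

```python
# Sketch of a sliding-window + bidirectional LSTM activity classifier.
# Hyperparameters and data are illustrative, not the paper's setup.
import numpy as np
from tensorflow.keras import layers, models

def sliding_windows(events, labels, window=50, step=25):
    """Cut a (T, n_sensors) event stream into overlapping windows, labelling
    each window with the activity at its last time step."""
    X, y = [], []
    for start in range(0, len(events) - window + 1, step):
        X.append(events[start:start + window])
        y.append(labels[start + window - 1])
    return np.array(X), np.array(y)

def build_bilstm(window, n_sensors, n_activities):
    model = models.Sequential([
        layers.Input(shape=(window, n_sensors)),
        layers.Bidirectional(layers.LSTM(64)),         # bidirectional recurrent layer
        layers.Dense(n_activities, activation="softmax")
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy usage with random data standing in for binary home-sensor streams
events = np.random.randint(0, 2, size=(1000, 30)).astype("float32")
labels = np.random.randint(0, 10, size=1000)
X, y = sliding_windows(events, labels)
model = build_bilstm(window=50, n_sensors=30, n_activities=10)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```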