In this article, we address the problem of creating a smart audio guide that adapts to the actions and interests of museum visitors. As an autonomous agent, our guide perceives the context and is able to interact with users in an appropriate fashion. To do so, it understands what the visitor is looking at, whether the visitor is moving inside the museum hall, and whether he or she is talking with a friend. The guide performs automatic recognition of artworks and provides configurable interface features that improve the user experience and the enjoyment of multimedia materials through semi-automatic interaction.
Our smart audio guide is backed by a computer vision system, coupled with audio and motion sensors, that runs in real time on a mobile device. We propose the use of a compact Convolutional Neural Network (CNN) that performs object classification and localization. Reusing the same CNN features computed for these tasks, we also perform robust artwork recognition. To improve recognition accuracy, we apply additional video processing: shape-based filtering, artwork tracking, and temporal filtering. The system has been deployed on an NVIDIA Jetson TK1 and an NVIDIA Shield Tablet K1 and tested in a real-world environment (the Bargello Museum in Florence).
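The temporal filtering step mentioned above can be illustrated with a minimal sketch: per-frame artwork predictions are smoothed with a sliding-window majority vote, so that spurious single-frame recognitions are suppressed. The window size, vote threshold, and class names below are hypothetical, not the values used in the deployed system.

```python
from collections import Counter, deque


class TemporalFilter:
    """Smooth per-frame artwork labels with a sliding-window majority vote.

    Illustrative sketch only: window_size and min_votes are assumed
    parameters, not those of the actual guide.
    """

    def __init__(self, window_size=15, min_votes=8):
        # Keep only the most recent `window_size` frame predictions.
        self.window = deque(maxlen=window_size)
        self.min_votes = min_votes

    def update(self, frame_label):
        """Add the current frame's predicted label (or None) and return
        the smoothed label, or None if no label has enough votes."""
        self.window.append(frame_label)
        votes = Counter(l for l in self.window if l is not None)
        if not votes:
            return None
        label, count = votes.most_common(1)[0]
        return label if count >= self.min_votes else None
```

With a short window, a single noisy frame does not change the emitted label; the filter only commits to an artwork once it dominates the recent history.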
Serious games have been widely exploited in medical training and rehabilitation. Although many medical simulators exist that aim to train the individual skills of medical operators, only a few take into account cooperation between team members. Since the World Health Organization (WHO) introduced the Surgical Safety Checklist, which must be carried out by surgical team members, several studies have shown that adopting this procedure can markedly reduce the risk of surgical crises. In this paper we introduce a natural interface featuring an interactive virtual environment that trains medical professionals to follow the safety procedures proposed by the WHO, adopting a 'serious game' approach. The system presents a realistic and immersive 3D interface and allows multiple users to interact through vocal input and hand gestures. Natural interaction between users and the simulator is achieved using the Microsoft Kinect™ sensor. The game can be seen as a role-playing game in which every trainee has to perform the correct steps of the checklist according to his or her professional role in the medical team.
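The role-based checklist logic described above can be sketched as a small state machine: each item names the role that must perform it, and the game accepts an action only when it is the next item in order and comes from the correct trainee. The items and role names below are illustrative placeholders, not the full WHO checklist or the simulator's actual data model.

```python
# Hypothetical (item, role) pairs standing in for the WHO checklist steps.
CHECKLIST = [
    ("confirm patient identity", "nurse"),
    ("confirm surgical site marked", "surgeon"),
    ("check anesthesia safety", "anesthetist"),
]


class ChecklistGame:
    """Accept checklist actions only in order and from the correct role."""

    def __init__(self, items=CHECKLIST):
        self.items = items
        self.step = 0  # index of the next expected checklist item

    def attempt(self, role, action):
        """Return True if `role` performing `action` is the correct next step."""
        if self.step >= len(self.items):
            return False  # checklist already complete
        expected_action, expected_role = self.items[self.step]
        if role == expected_role and action == expected_action:
            self.step += 1
            return True
        return False  # wrong role or out-of-order action

    @property
    def complete(self):
        return self.step == len(self.items)
```

In the actual game, `attempt` would be driven by recognized voice commands and gestures rather than direct method calls, and failed attempts could be logged as training feedback.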
In this paper we present the prototype system that will be used in the RIMSI project for the simulation and training of medical and paramedical personnel in emergency medicine. Immersive simulations in medical training are extremely useful for confronting emergency operators with scenarios ranging from the ordinary (e.g. an unconscious person on the ground) to the extreme (a car accident with several injured people) without putting participants in any harm. It is critical to exploit 3D virtual worlds in order to provide as much contextual information as possible to the operators: each emergency procedure must be adapted to environmental threats and to the presence of bystanders or multiple injured people in need of assistance. The presented prototype will simulate virtual first-aid scenarios with interactive 3D graphics. Users will interact through a gesture-based interface built on the Kinect™ sensor, which improves the immersiveness of the system and provides natural navigation and interaction within the virtual environment.