A smart house is a complex system, and configuring it to behave as desired is difficult and error prone. In this paper we extend a previously developed framework, based on timed automata, for designing safe and reliable home automation scenarios, with the goal of making it easier to use. To do so, we abstract the framework with an Event-Condition-Action (ECA) language for creating intelligent scenarios, together with constraints that prevent scenarios with undesirable behaviors from being applied. The language is in turn abstracted by a graphical user interface that enables the creation of scenarios by manipulating graphical blocks representing elements of the language. We have designed and implemented a prototype system to test our approach, and we report on a qualitative user study that was conducted.
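The abstract's Event-Condition-Action rules can be illustrated with a minimal sketch. All names here (the `Rule` class, the `motion_detected` event, the hallway-light scenario) are illustrative assumptions, not the paper's actual language:

```python
# Hypothetical sketch of an Event-Condition-Action (ECA) rule engine.
# A rule fires its action when its event occurs AND its condition holds.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    event: str                            # triggering event, e.g. "motion_detected"
    condition: Callable[[Dict], bool]     # guard over the house state
    action: Callable[[Dict], None]        # effect on the house state

def dispatch(rules: List[Rule], event: str, state: Dict) -> None:
    """Run every rule whose event matches and whose condition is satisfied."""
    for r in rules:
        if r.event == event and r.condition(state):
            r.action(state)

# Example scenario: turn on the hallway light at night when motion is detected.
state = {"lux": 3, "hallway_light": "off"}
rules = [Rule("motion_detected",
              lambda s: s["lux"] < 10,                      # condition: it is dark
              lambda s: s.update(hallway_light="on"))]      # action: light on
dispatch(rules, "motion_detected", state)
# state["hallway_light"] is now "on"
```

In the framework described by the abstract, such rules would additionally be checked against constraints (e.g. conflicting actions on the same device) before being applied.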
Video communication systems traditionally offer limited or no experience of eye contact due to the offset between cameras and the screen. In response, we are experimenting with the use of multiple Kinect cameras for generating a 3D model of the user, and then rendering a virtual camera angle giving the user an experience of eye contact. In doing this, we use concepts from KinectFusion, such as a volumetric voxel data representation and GPU accelerated ray tracing for viewpoint rendering. This achieves a detailed 3D model from a noisy source, and delivers a promising video output in terms of visual quality, lag and frame rate, enabling the experience of eye contact and face gaze.
Traditional video communication systems offer a very limited experience of eye contact due to the offset between cameras and the screen. In response, we present EyeGaze, which uses multiple Kinect cameras to generate a 3D model of the user and then renders a virtual camera angle, giving the user an experience of eye contact. As a novel approach, we use concepts from KinectFusion, such as a volumetric voxel data representation and GPU-accelerated ray tracing for viewpoint rendering. This achieves detail from a noisy source and allows the real-time video output to be a composite of old and new data. We frame our work in the literature on eye contact and on previous approaches to supporting it over video. We then describe EyeGaze and an empirical study comparing it with face-to-face communication and traditional video. The study shows that while face-to-face is still superior, EyeGaze adds value over traditional video in terms of eye contact, involvement, turn-taking and co-presence.
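The viewpoint rendering both abstracts refer to amounts to casting rays through a signed-distance voxel volume and locating the zero crossing (the surface), in the spirit of KinectFusion. The following toy sketch is an assumption-laden illustration, not EyeGaze's implementation: the grid, step size, and synthetic scene (a flat wall) are all invented for the example:

```python
# Toy sketch of ray casting through a signed-distance voxel grid:
# march along the ray, and when the sampled distance changes sign,
# interpolate to approximate where the surface lies.
import numpy as np

def raycast(tsdf, origin, direction, step=0.5, max_t=100.0):
    """Return the ray parameter t at which the zero level set is crossed."""
    direction = direction / np.linalg.norm(direction)
    prev_t, prev_d = 0.0, None
    t = 0.0
    while t < max_t:
        p = origin + t * direction
        idx = tuple(np.round(p).astype(int))       # nearest-voxel lookup
        if any(i < 0 or i >= s for i, s in zip(idx, tsdf.shape)):
            break                                  # left the volume
        d = tsdf[idx]
        if prev_d is not None and prev_d > 0 >= d:
            # Linear interpolation between the last positive and the first
            # non-positive sample approximates the surface crossing.
            return prev_t + step * prev_d / (prev_d - d)
        prev_t, prev_d = t, d
        t += step
    return None

# Synthetic volume: signed distance to a plane of voxels at x = 8.
grid = np.fromfunction(lambda x, y, z: 8.0 - x, (16, 16, 16))
t_hit = raycast(grid, np.array([0.0, 8.0, 8.0]), np.array([1.0, 0.0, 0.0]))
```

A real pipeline would run one such ray per output pixel on the GPU and fuse noisy depth frames into the volume over time; this sketch only shows the per-ray surface search.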