We have begun an exploration of how ubiquitous computing technology can facilitate different forms of audio communication within a family. We are interested in both intra- and inter-home communication. Though much technology exists to support this human-human communication, little of it makes effective use of the context of the communication partners. In the Aware Home Research Initiative, we are exploring how to augment a domestic environment with knowledge of the location and activities of its occupants. The Family Intercom project explores how this context can be used to create a variety of lightweight communication opportunities between collocated and remote family members. It is particularly important that context about the status of the callee be communicated to the caller, so that the caller can follow the appropriate social protocol for continuing a conversation. In this paper, we discuss our initial prototypes, which form a testbed for exploring these context-aware audio communication services.
We present a multi-camera, vision-based eye-tracking method that robustly locates and tracks users' eyes as they interact with an application. We propose enhancements to existing vision-based eye-tracking approaches, including (a) the use of multiple cameras to estimate head pose and increase sensor coverage, and (b) the use of probabilistic measures incorporating Fisher's linear discriminant to robustly track the eyes under varying lighting conditions in real time. We present experiments and quantitative results demonstrating the robustness of our eye tracking in two application prototypes.
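The abstract above names Fisher's linear discriminant as the basis for a probabilistic eye-tracking measure. The sketch below is not the authors' implementation; it is a minimal illustration of the underlying idea, assuming a two-class (eye vs. non-eye) formulation over flattened grayscale image patches. The patch size, synthetic training data, regularization constant, and threshold rule are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a two-class Fisher linear
# discriminant that scores image patches as "eye" vs. "non-eye".
import numpy as np

def fit_fisher_discriminant(X_eye, X_noneye):
    """Return the Fisher projection vector w and a decision threshold.

    X_eye, X_noneye: (n_samples, n_features) arrays of flattened patches.
    """
    mu_e, mu_n = X_eye.mean(axis=0), X_noneye.mean(axis=0)
    # Within-class scatter: sum of the two class covariance matrices.
    Sw = np.cov(X_eye, rowvar=False) + np.cov(X_noneye, rowvar=False)
    # Small regularization keeps Sw invertible with few training patches.
    Sw += 1e-3 * np.eye(Sw.shape[0])
    # Fisher direction maximizes between-class over within-class scatter.
    w = np.linalg.solve(Sw, mu_e - mu_n)
    # Threshold halfway between the projected class means (illustrative rule).
    threshold = 0.5 * (w @ mu_e + w @ mu_n)
    return w, threshold

def eye_score(patch, w, threshold):
    """Signed distance of a flattened patch from the decision boundary;
    positive values favor the 'eye' class."""
    return float(patch @ w - threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 12 * 12  # flattened 12x12 grayscale patches (assumed size)
    X_eye = rng.normal(0.6, 0.1, size=(200, d))     # stand-in eye patches
    X_noneye = rng.normal(0.4, 0.1, size=(200, d))  # stand-in background patches
    w, thr = fit_fisher_discriminant(X_eye, X_noneye)
    candidate = rng.normal(0.6, 0.1, size=d)
    print("eye score:", eye_score(candidate, w, thr))
```

In a tracker, such a score would presumably be evaluated over candidate patches near the previous eye location and combined with the head-pose estimates from the multiple cameras; that fusion is beyond the scope of this sketch.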
In this paper, we explore the learning that occurred in two types of collaborative learning environments in a seventh-grade life sciences classroom: an intra-group environment and an inter-group environment. Students used both types of collaboration tools, each tuned to the needs of the task they were performing within or across groups. We found that the learning outcomes in the two collaborative settings differed. During the intra-group collaboration, students focused more on the structure and behavior of their designs. The inter-group environment, on the other hand, led them to discuss the functions of their models and to ask for and provide justifications for those functions. We discuss these results and suggest integrating the inter-group and intra-group tools.