In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback on the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approximately 1.2 s). In the field study, three displays were installed for three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when the mirrored user image (+90%) or silhouette (+47%) is shown immediately, compared to a traditional attract sequence with a call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interacting behind those already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.
MPML3D is our first candidate for the next generation of authoring languages aimed at supporting digital content creators in providing highly appealing and highly interactive content with little effort. The language is based on our previously developed family of Multimodal Presentation Markup Languages (MPML), which broadly followed the "sequential" and "parallel" tagging structure scheme for generating pre-synchronized presentations featuring life-like characters and interactions with the user. The new markup language MPML3D deviates from this design framework and proposes a reactive model instead, which is apt to handle interaction-rich scenarios with highly realistic 3D characters. Interaction in previous versions of MPML could be handled only at the cost of considerable scripting effort due to branching. By contrast, MPML3D advocates a reactive model that allows perceptions of other characters or the user to interfere with the presentation flow at any time, and thus facilitates natural and unrestricted interaction. MPML3D is designed as a powerful and flexible language that is easy for non-experts to use, yet it is also extensible: it allows content creators to add functionality, such as a narrative model, using popular scripting languages.
In this paper, we present a game-like scenario based on a model of social group dynamics inspired by theories from the social sciences. The model is augmented by a model of proxemics that simulates the role of distance and spatial orientation in human-human communication. By means of proxemics, a group of human participants may signal to other humans whether or not they welcome new members joining the group. In this paper, we describe the results of an experiment we conducted to shed light on the question of how humans respond to such cues when they are shown by virtual humans.
Calibration of camera networks is a well-studied problem. However, most previous attempts assume all the cameras in the network to be synchronized, which is especially difficult over large distances. In this paper we present a simple method to fully calibrate unsynchronized cameras with differing frame rates directly from the acquired video content. The presented methods utilize either content-based tracked features or, alternatively, a light marker, together with epipolar or homography-based constraints, to estimate the synchronization as well as intrinsic and extrinsic camera parameters. We assume two cameras within the network to be pre-calibrated (intrinsics only) using standard approaches. We validate our method with numerous simulations for noise analysis as well as real experiments. Furthermore, we show how our approach can be used for robust 3D reconstruction despite the cameras being unsynchronized.
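The core idea of estimating synchronization from content can be illustrated with a minimal sketch. This is not the paper's method: it assumes a hypothetical rectified camera pair (where the epipolar constraint reduces to matching y-coordinates), synthetic feature tracks, and an integer frame offset found by brute-force search over the epipolar residual.

```python
# Sketch: recover the temporal offset between two unsynchronized cameras
# from tracked feature trajectories. Assumes a rectified pair, so the
# epipolar constraint reduces to equal y-coordinates; data is synthetic.

def epipolar_residual(track_a, track_b, offset):
    """Mean |y_a(t) - y_b(t + offset)| over the overlapping frames."""
    pairs = [
        (track_a[t][1], track_b[t + offset][1])
        for t in range(len(track_a))
        if 0 <= t + offset < len(track_b)
    ]
    return sum(abs(ya - yb) for ya, yb in pairs) / len(pairs)

def estimate_offset(track_a, track_b, max_offset=10):
    """Brute-force search for the frame offset minimizing the residual."""
    return min(range(-max_offset, max_offset + 1),
               key=lambda d: epipolar_residual(track_a, track_b, d))

# Synthetic (x, y) trajectories: camera B lags camera A by 3 frames.
track_a = [(t, (t * 0.5) % 7.0) for t in range(40)]
track_b = [(t, ((t - 3) * 0.5) % 7.0) for t in range(40)]

print(estimate_offset(track_a, track_b))  # → 3
```

A real system would estimate the fundamental matrix from the tracks, score candidate offsets by symmetric epipolar distance, and handle non-integer offsets arising from differing frame rates by interpolating trajectories.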
In this paper, we introduce a direct-manipulation tabletop multi-touch user interface for spatial audio scenes. Although spatial audio rendering has existed for several decades, mass-market applications have not been developed, and the user interfaces still address a small group of expert users. We implemented an easy-to-use direct-manipulation interface for multiple users, taking full advantage of the object-based audio rendering mode. Two versions of the user interface have been developed to explore variations in information architecture and will be evaluated in user tests.