Abstract. Remote participants in hybrid meetings often have difficulty following what is going on in the (physical) meeting room to which they are connected. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to see how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus of attention detection, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant's attention when a topic of interest is raised, pointing to the transcription of the relevant fragment to help him catch up.
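As an illustration only (the abstract does not describe the implementation), a minimal sketch of how such a keyword spotter might alert the remote participant and point to the transcript fragment could look as follows; the keyword list, the stubbed ASR stream, and the `notify` callback are hypothetical.

```python
# Hypothetical sketch of a keyword-spotting alert loop; the keywords and the
# notification mechanism are illustrative, not taken from the described system.
KEYWORDS = {"budget", "deadline", "remote access"}

def watch_transcript(asr_segments, notify):
    """Scan incoming ASR segments and alert when a topic of interest is mentioned."""
    for timestamp, text in asr_segments:      # each segment: (timestamp, transcribed text)
        hits = {kw for kw in KEYWORDS if kw in text.lower()}
        if hits:
            # Point the remote participant at the transcript fragment so they can catch up.
            notify(f"Topic(s) {sorted(hits)} mentioned at {timestamp}: \"{text}\"")

if __name__ == "__main__":
    # Example usage with a stubbed ASR stream and a print-based notifier.
    demo_stream = [("00:01:12", "We need to revisit the budget before Friday"),
                   ("00:02:05", "Let's move on to the next agenda item")]
    watch_transcript(demo_stream, notify=print)
```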
Abstract. This paper presents a virtual dancer that is able to dance to the beat of music coming in through the microphone and to motion beats detected in the video stream of a human dancer. In the current version, its moves are generated from a lexicon that was derived manually from an analysis of video clips of nine rap songs by different rappers. The system also allows the moves in the lexicon to be adapted on the basis of style parameters.
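A minimal sketch, under assumed data structures, of how moves from such a lexicon might be selected and aligned to detected beats while respecting a style parameter; the lexicon entries, the "energy" parameter, and the function names are illustrative and not taken from the paper.

```python
import random

# Hypothetical move lexicon; each entry has a length in beats and a style attribute.
MOVE_LEXICON = [
    {"name": "arm_wave",  "duration_beats": 2, "energy": 0.4},
    {"name": "head_nod",  "duration_beats": 1, "energy": 0.2},
    {"name": "jump_spin", "duration_beats": 4, "energy": 0.9},
]

def pick_move(style_energy, tolerance=0.3):
    """Choose a move whose energy is close to the requested style setting."""
    candidates = [m for m in MOVE_LEXICON
                  if abs(m["energy"] - style_energy) <= tolerance]
    return random.choice(candidates or MOVE_LEXICON)

def schedule_moves(beat_times, style_energy):
    """Align selected moves to detected (music or motion) beats."""
    schedule, i = [], 0
    while i < len(beat_times):
        move = pick_move(style_energy)
        schedule.append((beat_times[i], move["name"]))
        i += move["duration_beats"]       # next move starts after this one's beats
    return schedule

# Example: low-energy style over six detected beats half a second apart.
print(schedule_moves([0.0, 0.5, 1.0, 1.5, 2.0, 2.5], style_energy=0.3))
```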
Abstract. We study videoconferencing for meetings with some co-located participants and one remote participant. A standard Skype-like interface for the remote participant is compared to a more immersive 3D interface that conveys gaze directions in a natural way. Experimental results show that the 3D interface is promising: all significant differences are in favor of 3D, and according to the participants the 3D interface clearly supports selective gaze and selective listening. We found some significant differences in perceived quality of cooperation and organization, and in opinions about other group members. No significant differences were found for perceived social presence of the remote participants, but we did measure differences in social presence for co-located participants. Neither measured gaze frequency and duration nor perceived turn-taking behavior differed significantly.