The capture of lectures or similar presentations is of interest for several reasons. From the attendee's perspective, students may use the recordings when working on homework assignments, preparing for exams, or catching up on a missed class. From the instructor's perspective, a captured lecture may be evaluated, recaptured with improvements, or reused as complementary learning material. Moreover, captured lectures may be a valuable resource for e-learning and distance-education courses. In this paper we detail the design rationale behind the development of a prototype platform for the ubiquitous capture of live presentations and their transformation into corresponding interactive multi-video objects. Our approach includes capturing important context information which, when incorporated into the multimedia object, enables interaction with the recorded lecture along novel dimensions. We evaluated our prototype through case studies involving instructors and students, which allowed us to identify important features and novel uses for the platform.
In most current digital TV applications, user interaction takes place by pressing keys on a remote control. For simple applications this type of interaction is sufficient; however, as interactive applications become more popular, new input devices are in demand. After discussing motivating scenarios, this paper presents an architecture that offers applications running on a set-top box the possibility of receiving multimodal data (audio, video, image, ink, accelerometer, text, voice, and customized data) from multiple devices (such as mobile phones, PDAs, tablet PCs, notebooks, or even desktops). We validated the architecture by implementing a corresponding multimodal interaction component that extends the Brazilian Digital TV middleware, and by building applications that use the component.
The problem of allowing user-centric control within multimedia presentations is important to document engineering when the presentations are specified as structured multimedia documents. In this paper we investigate this problem in the context of end-user "real-time" editing of interactive video programs.