Many authoring tools let authors create scenarios, but very few let them create an active multimedia scenario that will not only play itself back, but will change course dynamically, depending on user interactions. Our temporal model provides a new way to represent asynchronous and synchronous temporal events, allowing authors to create scenarios that offer viewers seamless, transparent options.

When we think of documents, we usually think of books. Authoring a book requires writing a linear storyline that describes events that happen in time. Creating a multimedia presentation is considerably more complex. Unlike a book, which is unimedia and linear, multimedia presentations often contain media that must occur simultaneously or in some related way, and the author must specify all these relations. For instance, a book, which consists of words strung together, is most often read from front to back. Multimedia presentations contain many other media types, such as audio and video, that can be rendered in parallel. Further, the presentation may be interactive and subtly different at each runtime. We need new document models and methods of representing temporal relations in multimedia documents.

One of the main issues in temporal models of scenarios is the model's degree of flexibility in expressing different temporal relationships. In this article, we are mainly concerned with the temporal behavior of scenarios, not the other attributes of the document such as its layout, quality, and playback speed. We studied how to represent and model multimedia scenarios (fully specified temporal entities involving multiple media) that "play themselves back"--that is, multiple media are rendered automatically before the users' eyes--while letting the user interact with the running presentation, "driving" it in a custom direction.
We provide a new representation for asynchronous and synchronous temporal events that lets authors create scenarios offering viewers non-halting, transparent options.

A temporal model for active multimedia

Perhaps the most prevalent temporal model is the timeline,1 which aligns all events (see "Definitions" sidebar) on a single axis representing time. Since the events all appear in the order in which they should be presented, exactly one of the basic point relations, "before" (<), "after" (>), or "simultaneous to" (=), holds between any pair of events on a single timeline (Figure 1). The timeline model, though simple and graphical, lacks the flexibility to represent relations that are determined interactively at runtime. For example, assume a graphic (say a mathematical graph) is to be rendered on the screen only until a user action (say a mouse selection) dictates that the next one should be rendered. The start time of the graphic is known at the time of authoring. However, the end time of the graphic depends on the user action and cannot be known until presentation time. Hence a traditional timeline, which requires a total specification of all temporal relations between media objects, cannot repres...
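The timeline's limitation can be made concrete with a small sketch. The class and function names below are our own illustration, not from the article: each event carries a start time fixed at authoring time, while an interactively ended event keeps an unresolved end time (`None`) until playback. The pairwise point relation between start points is the only thing a pure timeline can always answer.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch (names are ours, not the article's): a timeline event
# whose start is known at authoring time; end is None when it can only be
# resolved by a user action (e.g., a mouse selection) at presentation time.
@dataclass
class Event:
    name: str
    start: float                  # fixed at authoring time
    end: Optional[float] = None   # None => determined interactively at runtime

def point_relation(a: Event, b: Event) -> str:
    """Basic point relation between two events' start points on one timeline."""
    if a.start < b.start:
        return "before"
    if a.start > b.start:
        return "after"
    return "simultaneous"

graph = Event("graph", start=0.0)                    # end unresolved until runtime
narration = Event("narration", start=0.0, end=30.0)
caption = Event("caption", start=5.0, end=10.0)

print(point_relation(graph, narration))  # simultaneous
print(point_relation(caption, graph))    # after
```

A strict timeline demands a total specification, so it has nowhere to place `graph`'s end point; the `None` stays unresolved until the viewer's action, which is exactly the case the timeline model cannot express.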
This paper presents MediAlly, a middleware for supporting energy-efficient, long-term remote health monitoring. Data is collected using physiological sensors and transported back to the middleware using a smartphone. The key to MediAlly's energy-efficient operation lies in the adoption of an Activity Triggered Deep Monitoring (ATDM) paradigm, where data collection episodes are triggered only when the subject is determined to possess a specified context. MediAlly supports the on-demand collection of contextual provenance using a novel low-overhead provenance collection sub-system. The behaviour of this sub-system is configured using an application-defined context composition graph. The resulting provenance stream provides valuable insight while interpreting the 'episodic' sensor data streams. The paper also describes our prototype implementation of MediAlly using commercially available devices.
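The ATDM idea described above can be sketched in a few lines. This is a minimal illustration under our own assumptions, not MediAlly's actual API: a low-cost context check gates the energy-hungry "deep monitoring" step, so sensor collection runs only while the specified context holds.

```python
# Hypothetical sketch of Activity Triggered Deep Monitoring (ATDM):
# a cheap context predicate gates expensive sensor collection, so
# "deep monitoring" episodes run only when the trigger context holds.
def atdm_loop(context_detected, collect_sample, context_stream):
    episodes = []
    for ctx in context_stream:            # stream of context readings
        if context_detected(ctx):         # low-cost trigger check
            episodes.append(collect_sample(ctx))  # costly deep monitoring
    return episodes

# Usage with stub trigger and collector:
samples = atdm_loop(
    context_detected=lambda c: c == "active",
    collect_sample=lambda c: ("ecg_sample", c),
    context_stream=["rest", "active", "rest", "active"],
)
print(samples)  # [('ecg_sample', 'active'), ('ecg_sample', 'active')]
```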