Smartphones are conquering the mobile phone market; they are not just phones but also media players, gaming consoles, personal calendars, storage devices, etc. They are portable computers with fewer computing capabilities than personal computers. However, unlike personal computers, smartphones can be carried by their users at all times. The ubiquity of mobile phones and their computing capabilities provide an opportunity to use them as life-logging devices. Life-logs (personal e-memories) are used to record users' daily life events and assist them in memory augmentation. In a more technical sense, life-logs sense and store users' contextual information from their environment through sensors, which are core components of life-logs. Spatiotemporal aggregation of sensor information can be mapped to users' life events. We propose UbiqLog, a lightweight, configurable and extendable life-log framework that uses the mobile phone as a device for life logging. The proposed framework extends previous research in this field, which investigated mobile phones as life-log tools through continuous sensing. Its openness in terms of sensor configuration allows developers to create flexible, multipurpose life-log tools. In addition, the framework contains a data model and an architecture that can be used as a reference model for further life-log development, including its extension to other devices, such as e-book readers, TVs, etc.
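The abstract above mentions a configurable sensor layer and a record-oriented data model only at a high level. As a minimal sketch of what such a configuration and a single spatiotemporal life-log record could look like, the Java example below is offered under assumptions: the class and field names (SensorConfig, LifeLogRecord, etc.) are hypothetical illustrations, not taken from the UbiqLog codebase.

```java
import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical sketch of one configurable sensor entry in a mobile life-log framework. */
class SensorConfig {
    final String sensorName;      // e.g. "location", "wifi", "call"
    final boolean enabled;        // sensors can be switched on or off per user/device
    final long sampleIntervalMs;  // continuous-sensing interval

    SensorConfig(String sensorName, boolean enabled, long sampleIntervalMs) {
        this.sensorName = sensorName;
        this.enabled = enabled;
        this.sampleIntervalMs = sampleIntervalMs;
    }
}

/** A single life-log record: contextual information captured from one sensor at one point in time. */
class LifeLogRecord {
    final String sensorName;
    final Instant timestamp;
    final Map<String, String> payload = new LinkedHashMap<>();

    LifeLogRecord(String sensorName, Instant timestamp) {
        this.sensorName = sensorName;
        this.timestamp = timestamp;
    }

    /** Serialize to a simple JSON-like line, one record per line of the log. */
    String toJson() {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"sensor\":\"").append(sensorName)
          .append("\",\"time\":\"").append(timestamp).append("\"");
        for (Map.Entry<String, String> e : payload.entrySet()) {
            sb.append(",\"").append(e.getKey()).append("\":\"").append(e.getValue()).append("\"");
        }
        return sb.append("}").toString();
    }
}

public class LifeLogSketch {
    public static void main(String[] args) {
        SensorConfig location = new SensorConfig("location", true, 60_000);
        if (location.enabled) {
            LifeLogRecord rec = new LifeLogRecord(location.sensorName, Instant.now());
            rec.payload.put("lat", "48.3069");
            rec.payload.put("lon", "14.2858");
            System.out.println(rec.toJson());
        }
    }
}
```

In this reading, extensibility comes from treating each sensor as a named, toggleable entry with its own sampling interval, while the shared record format keeps spatiotemporal aggregation uniform across sensors.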
As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which make it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access, and demonstrate an increase in prediction accuracy through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.
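To make the semantic abstraction idea more concrete, the hedged Java sketch below shows how a window of raw sensor samples might be collapsed into one meaningful "information object" before on-device prediction. The class names, thresholds, and activity labels are illustrative assumptions, not the framework's actual implementation.

```java
import java.time.Instant;
import java.util.List;

/** Hypothetical semantic abstraction: raw sensor samples -> a human-readable information object. */
class SemanticEvent {
    final String label;          // e.g. "walking", "sedentary"
    final Instant start, end;

    SemanticEvent(String label, Instant start, Instant end) {
        this.label = label;
        this.start = start;
        this.end = end;
    }

    @Override
    public String toString() {
        return label + " [" + start + " .. " + end + "]";
    }
}

public class SemanticAbstractionSketch {
    /** Abstract a window of accelerometer magnitudes (m/s^2) into one semantic event.
     *  The threshold is an illustrative placeholder, not a value from the paper. */
    static SemanticEvent abstractWindow(List<Double> magnitudes, Instant start, Instant end) {
        double mean = magnitudes.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        String label = mean > 11.5 ? "walking" : "sedentary";
        return new SemanticEvent(label, start, end);
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2024-01-01T09:00:00Z");
        Instant t1 = Instant.parse("2024-01-01T09:05:00Z");
        List<Double> window = List.of(12.1, 12.4, 11.9, 12.8);
        // Prediction then operates on compact semantic events instead of raw samples,
        // which is one way on-device analytics can stay within a smartwatch's energy budget.
        System.out.println(abstractWindow(window, t0, t1));
    }
}
```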
This paper reports on a between-subject, comparative online study of three information visualization demonstrators that each displayed the same dataset by way of an identical scatterplot technique, yet differed in style in terms of visual and interactive embellishment. We validated stylistic adherence and integrity through a separate experiment in which a small cohort of participants assigned our three demonstrators to predefined groups of stylistic examples, after which they described the styles in their own words. From the online study, we discovered significant differences in how participants execute specific interaction operations, and in the types of insights that followed from them. However, in spite of significant differences in apparent usability, enjoyability and usefulness between the style demonstrators, no variation was found in the self-reported depth, expert-rated depth, confidence or difficulty of the resulting insights. We applied three different methods of insight analysis, revealing how style impacts the creation of insights, ranging from higher-level pattern seeking to a more reflective, interpretative engagement with the content that underlies those patterns. As this study forms only the first step in determining how the impact of style in information visualization could best be evaluated, we propose several guidelines and tips on how to gather, compare and categorize insights through an online evaluation study, particularly in terms of analyzing the concise yet wide variety of insights and observations in a trustworthy and reproducible manner.