Delivering effortless interactions and appropriate interventions through pervasive systems requires making sense of multiple streams of sensor data. This is particularly challenging when those data concern people's natural behaviours in the real world. This paper takes a multidisciplinary perspective on annotation and draws on an exploratory study of 12 people who were encouraged to use a multi-modal annotation app while living in a prototype smart home. Analysis of the app usage data and of semi-structured interviews with the participants revealed strengths and limitations of self-annotation in a naturalistic context. Handing control of the annotation process to research participants enabled them to reason about their own data while generating accounts that were appropriate and acceptable to them. Self-annotation gave participants an opportunity to reflect on themselves and their routines, but it was also a means to express themselves freely and sometimes even a backchannel for communicating playfully with the researchers. However, self-annotation may not be an effective way to capture accurate start and finish times for activities, or the locations associated with them. This paper offers new insights and recommendations for the design of self-annotation tools for deployment in the real world.
Capturing meal images using mobile phone cameras is a promising alternative to traditional dietary assessment methods. Acquiring photos is reasonably simple, but analysing the nutritional content of the images is a major challenge. Automated food identification and portion size assessment is computationally and participant intensive, relying on participant feedback for accuracy (1). Dietitian analysis of photos is accurate but time-consuming and expensive (2). Crowdsourcing could offer a rapid, low-cost alternative by utilising the life-long experience that all humans have in food identification. Previous crowdsourcing methods include the Eatery app, which produces a simple 11-point 'healthiness' scale for each meal (3), and the PlateMate system, which creates a list of all individual foods with portion sizes, energy and macronutrient content (4). While the Eatery produces limited and subjective data on meal content, PlateMate is a complex integrated system of multiple tasks requiring on average 25 workers, costing £2·75 and taking 90 min per image. For feasible data capture in large-scale longitudinal studies, crowdsourcing data from meal photos needs to be cheaper and quicker. We aimed to develop a simpler task and to test its feasibility for crowdsourcing dietary data.

FoodFinder, a single task for identifying food groups and portion sizes, was developed using Qualtrics (www.qualtrics.com/) and linked to the Prolific Academic (https://prolific.ac/) crowdsourcing platform for recruitment and reimbursement of a UK crowd. Thirty meal photos with measured total meal weight (grams) were analysed by a dietitian and by crowds ranging in size from 5 to 50 people. Estimates of total meal weight from the different-sized crowds and from the dietitian were compared against actual meal weight (the gold standard). To establish group consensus, crowd estimates were weighted by majority agreement (5). Bland-Altman analysis assessed agreement with actual meal weight.

A crowd of 5 people underestimated true meal weight by 63 g, equating to 15 % of actual meal weight, with limits of agreement (LOA) from −299 to 174 g. In comparison, the dietitian overestimated by 28 g, equating to 9 % of actual meal weight, with LOA from −158 to 214 g. With a crowd of 5 people, crowdsourcing cost £3·35 and took a mean of 2 min 55 sec (SD 2 min 6 sec) per image. A crowd of 50 had similar accuracy and limits of agreement (−65 g; LOA −278 to 149 g) but was more expensive. Further development of FoodFinder is required to make rapid, low-cost analysis of meal photos via crowdsourcing a feasible method for assessing diet.
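For readers unfamiliar with the agreement statistics quoted above, the sketch below shows one conventional way to compute a Bland-Altman bias (mean difference) and 95 % limits of agreement for crowd estimates against weighed meal weights. It is a minimal illustration, not the authors' analysis code: the function name, the 1.96 × SD limits and the example numbers are assumptions made here, and the majority-agreement weighting of crowd estimates described in the abstract is omitted for brevity.

import numpy as np

def bland_altman(estimates, reference):
    # estimates, reference: per-meal weights in grams, e.g. crowd
    # consensus estimates vs. the weighed "gold standard" meal weight.
    estimates = np.asarray(estimates, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = estimates - reference              # negative values = underestimation
    bias = diff.mean()                        # mean difference (g)
    sd = diff.std(ddof=1)                     # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95 % limits of agreement
    return bias, loa

# Illustrative (made-up) data for three meals, weights in grams
crowd_estimates = [310, 455, 520]
actual_weights = [350, 480, 500]
bias, (lo, hi) = bland_altman(crowd_estimates, actual_weights)
print(f"bias = {bias:.0f} g, LOA = ({lo:.0f}, {hi:.0f}) g")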