Good health is the result of a healthy lifestyle, in which attention to physical activity and nutrition is a key concern. However, in today's society, nutritional disorders are becoming increasingly frequent, affecting children, adults, and the elderly, mainly due to limited nutrition knowledge and the lack of a healthy lifestyle. A commonly adopted therapy for these imbalances is to monitor physical activity and daily habits, for instance by recording exercise or creating custom meal plans that quantify the macronutrients and micronutrients obtained in each meal. Nowadays, many health tracking applications (HTA) have been developed that, for example, record energy intake and users' physiological parameters, or measure physical activity during the day. However, most existing HTA lack a uniform architectural design on top of which other applications and services can be built. In this manuscript, we present a system architecture intended to serve as a reference architecture for building HTA solutions. To validate the proposed architecture, we performed a preliminary evaluation with 15 well-recognized experts in systems and software architecture from different organizations around the world, who considered that our proposal can generate HTA architectures that are adequate, reliable, secure, modifiable, portable, functional, and of high conceptual integrity. To show the applicability of the architecture to different HTA, we developed two telemonitoring systems based on it, targeted at different tasks: nutritional coaching (Food4Living) and physical exercise coaching (TrainME). The purpose was to illustrate the kind of end-user monitoring applications that could be developed.
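To make the idea of a reusable reference architecture more concrete, the sketch below shows how a layered HTA design might be expressed as pluggable acquisition, processing, and feedback components; the class and method names are illustrative assumptions, not the interfaces defined in the paper.

```python
# A minimal sketch of a layered reference architecture for health tracking
# applications (HTA), expressed as pluggable components. All names here are
# illustrative assumptions, not the paper's actual interfaces.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Measurement:
    kind: str          # e.g. "meal", "exercise", "heart_rate"
    value: float
    unit: str
    timestamp: datetime


class DataSource(ABC):
    """Acquisition layer: wearables, manual logs, smart appliances."""
    @abstractmethod
    def read(self) -> list[Measurement]: ...


class Analyzer(ABC):
    """Processing layer: turns raw measurements into coaching insights."""
    @abstractmethod
    def analyze(self, data: list[Measurement]) -> dict: ...


class FeedbackChannel(ABC):
    """Presentation layer: app notifications, chat messages, reports."""
    @abstractmethod
    def deliver(self, insights: dict) -> None: ...


class TelemonitoringApp:
    """A concrete HTA (e.g. a nutrition or exercise coach) is assembled by
    wiring components from each layer rather than built monolithically."""
    def __init__(self, sources, analyzer, channel):
        self.sources, self.analyzer, self.channel = sources, analyzer, channel

    def run_cycle(self) -> None:
        data = [m for s in self.sources for m in s.read()]
        self.channel.deliver(self.analyzer.analyze(data))
```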
Nutrition e-coaches have proven to be a successful tool for fostering healthy eating habits. Most of these systems are based on graphical user interfaces in which users select the meals they have ingested from predefined lists and receive feedback on their diet. On the one hand, conversational interfaces based on natural language processing allow users to interact with the coach more easily and with fewer restrictions. On the other hand, natural language introduces more ambiguity: instead of selecting the input from a predefined, finite list of meals, users can describe their intakes in many different ways, which the system must translate into a tractable semantic representation from which to derive the nutritional aspects of interest. In this paper, we present a method that improves on state-of-the-art approaches by including nutritional semantic aspects at different stages of the natural language understanding of the user's written or spoken input. The outcome is a rich nutritional interpretation of each user intake that is independent of the modality used to interact with the coach.
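As a rough illustration of the kind of interpretation the abstract describes, the sketch below maps a free-text (or transcribed) meal description to a structured nutritional record; the regular expression, toy food table, and field names are simplifying assumptions rather than the paper's actual method.

```python
# A minimal sketch of turning a free-text intake description into a structured
# nutritional interpretation. The food table values and the parsing rule are
# illustrative assumptions only.
import re
from dataclasses import dataclass, field

# Toy nutritional knowledge base: per-100 g macronutrients (illustrative values).
FOOD_DB = {
    "rice":    {"kcal": 130, "protein_g": 2.7,  "carbs_g": 28.0, "fat_g": 0.3},
    "chicken": {"kcal": 165, "protein_g": 31.0, "carbs_g": 0.0,  "fat_g": 3.6},
}


@dataclass
class IntakeInterpretation:
    foods: list = field(default_factory=list)      # (name, grams) pairs
    nutrients: dict = field(default_factory=dict)  # aggregated macronutrients


def interpret(utterance: str) -> IntakeInterpretation:
    """Very rough semantic interpretation: spot known foods and quantities."""
    result = IntakeInterpretation()
    totals = {"kcal": 0.0, "protein_g": 0.0, "carbs_g": 0.0, "fat_g": 0.0}
    for grams, name in re.findall(r"(\d+)\s*g(?:rams)?\s+of\s+(\w+)", utterance.lower()):
        if name in FOOD_DB:
            factor = int(grams) / 100.0
            result.foods.append((name, int(grams)))
            for key in totals:
                totals[key] += FOOD_DB[name][key] * factor
    result.nutrients = totals
    return result


print(interpret("I had 150 g of rice and 200 grams of chicken"))
```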
Mental health and mental wellbeing have become an important concern for many citizens as they navigate their environment and the workplace. New technology solutions such as chatbots are potential channels for supporting and coaching users to maintain a good state of mental wellbeing. Chatbots have the added value of providing social conversations and coaching 24/7, outside conventional mental health services. However, little is known about the acceptability and user-led requirements of this technology. This paper uses a living lab approach to elicit requirements, opinions, and attitudes towards the use of chatbots for supporting mental health. The data were collected from people living with anxiety or mild depression in a workshop setting. The audio of the workshop was recorded and a thematic analysis was carried out. The results are co-created functional requirements and a number of use case scenarios that can guide future development of chatbots in the mental health domain.
Currently, many tools exist that support the monitoring and encouragement of healthy nutrition habits in the context of wellness promotion. In this domain, interfaces based on natural language provide more flexibility for nutritional self-reporting than traditional form-based applications, allowing users to provide richer and more spontaneous descriptions. Nonetheless, in certain circumstances, natural language records may miss important aspects, such as the quantity of food eaten, which results in incomplete recordings. In the Internet-of-Things (IoT) paradigm, smart home appliances can support and complement the recording process so as to make it more accurate. However, in order to build systems that support the semantic analysis of nutritional self-reports, it is necessary to integrate multiple inter-related components, possibly within complex e-health platforms. For this reason, these components should be designed and encapsulated in a way that avoids monolithic approaches, which lead to rigidity and dependency on particular technologies. Currently, there are no models or architectures that serve as a reference for developers towards this objective. In this paper, we present a service-based architecture that helps to contrast and complement descriptions of food intakes by means of connected smart home devices, coordinating all the stages in the process of recognizing food records provided in natural language. Additionally, we identify and design the essential services required to automate the recording and subsequent processing of natural language descriptions of nutritional intakes in association with smart home devices. The functionality provided by each of these services can be used in isolation, out of the box, or within downstream pipelines, avoiding the drawbacks of monolithic architectures.
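The sketch below illustrates, under assumed service and field names, how two decoupled services could cooperate so that a connected kitchen scale completes a natural-language food record whose quantity is missing; it is not the architecture described in the paper, only an example of the service-based style it advocates.

```python
# A minimal sketch of complementing an incomplete natural-language food record
# with data from a smart home appliance. Service names, payload fields, and
# the stubbed readings are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class FoodRecord:
    food: str
    grams: Optional[float] = None          # often missing in free-text reports
    source: str = "natural_language"


class FoodRecordService:
    """Parses self-reports; here it simply simulates an incomplete record."""
    def recognize(self, utterance: str) -> FoodRecord:
        return FoodRecord(food="pasta")    # quantity not mentioned by the user


class SmartScaleService:
    """Wraps a connected appliance that can report the weighed portion."""
    def last_weighed_portion(self) -> float:
        return 180.0                       # grams, stubbed sensor reading


class RecordCompletionService:
    """Orchestrates both services without coupling them to each other."""
    def __init__(self, records: FoodRecordService, scale: SmartScaleService):
        self.records, self.scale = records, scale

    def complete(self, utterance: str) -> FoodRecord:
        record = self.records.recognize(utterance)
        if record.grams is None:
            record.grams = self.scale.last_weighed_portion()
            record.source = "natural_language+iot"
        return record


print(RecordCompletionService(FoodRecordService(), SmartScaleService())
      .complete("I ate some pasta for lunch"))
```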
Devices with voice interfaces are enabling interesting new interaction scenarios in ambient intelligence settings. The use of several such devices in the same environment opens up the possibility of comparing the inputs gathered by each of them and performing more accurate recognition and processing of user speech. However, the combination of multiple devices presents coordination challenges, as the processing of one voice signal by different speech processing units may yield conflicting outputs, and it is necessary to decide which source is the most reliable. This paper presents an approach to rank several sources of spoken input in multi-device environments in order to give preference to the input with the highest estimated quality. The voice signals received by the multiple devices are assessed in terms of their estimated acoustic quality and the reliability of the speech recognition hypotheses produced. After this assessment, each input is assigned a single score by which the audio sources are ranked, so that the best one can be selected for processing by the system. To validate this approach, we performed an evaluation using a corpus of 4608 audio recordings made in a two-room intelligent environment with 24 microphones. The experimental results show that our ranking approach makes it possible to successfully orchestrate an increasing number of acoustic inputs, obtaining better recognition rates than using a single input, in both clean and noisy settings.
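A minimal sketch of the ranking idea, assuming a simple weighted combination of an SNR-style quality estimate and ASR confidence (the paper's actual quality measures and weighting may differ):

```python
# A minimal sketch of ranking competing audio captures of the same utterance.
# The scoring formula and field names are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class CapturedInput:
    device_id: str
    snr_db: float           # estimated acoustic quality of the signal
    asr_confidence: float   # recognizer confidence in [0, 1]
    hypothesis: str


def score(inp: CapturedInput, w_acoustic: float = 0.5) -> float:
    """Combine normalized acoustic quality and ASR confidence into one score."""
    snr_norm = max(0.0, min(inp.snr_db / 30.0, 1.0))   # clamp to [0, 1]
    return w_acoustic * snr_norm + (1.0 - w_acoustic) * inp.asr_confidence


def best_input(captures: list[CapturedInput]) -> CapturedInput:
    """Rank all captures of the same utterance and pick the most reliable."""
    return max(captures, key=score)


captures = [
    CapturedInput("mic_kitchen", snr_db=8.0,  asr_confidence=0.55, hypothesis="turn on the lights"),
    CapturedInput("mic_living",  snr_db=22.0, asr_confidence=0.91, hypothesis="turn on the lights"),
]
print(best_input(captures).device_id)   # -> mic_living
```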