Mobile health (mHealth) interventions hold promise for augmenting existing health promotion interventions. Older adults present unique challenges to advancing new models of technology-based health promotion, including sensory limitations and less experience with mHealth, underscoring the need for specialized usability testing. We use an open-source mHealth device as a case example for its integration into a newly designed health services intervention. We performed a convergent, parallel mixed-methods study, including semi-structured interviews, focus groups, and questionnaires, using purposive sampling of 29 older adults, 4 community leaders, and 7 clinicians in a rural setting. We transcribed the data, developed codes informed by thematic analysis using inductive and deductive methods, and assessed the quantitative data using descriptive statistics. Our results suggest that end-users play an important role in the user-centered design of mHealth devices and that aesthetics are critically important. The prototype could feasibly be integrated within health behavior interventions. All participants desired centralized dashboards, and ecological momentary assessment could be an important part of monitoring. Concerns about mHealth, including the prototype device, involve the device's accuracy, its intrusiveness in daily life, and privacy. Formative evaluations are critically important before deploying large-scale interventions.
How romantic partners interact with each other during a conflict influences how they feel at the end of the interaction and is predictive of whether the partners stay together in the long term. Hence, understanding the emotions of each partner is important. Yet current approaches rely on self-reports, which are burdensome and hence limit how frequently this data can be collected. Automatic emotion prediction could address this challenge. Insights from psychology research indicate that partners' behaviors influence each other's emotions in conflict interactions; hence, the behavior of both partners could be considered to better predict each partner's emotions. However, how doing so compares to using only each partner's own behavior in terms of emotion prediction performance has yet to be investigated. In this work, we used BERT to extract linguistic features (i.e., what partners said) and openSMILE to extract paralinguistic features (i.e., how they said it) from a data set of 368 German-speaking Swiss couples (N = 736 individuals) who were videotaped during an 8-minute conflict interaction in the laboratory. Based on those features, we trained machine learning models to predict whether partners feel positive or negative after the conflict interaction. Our results show that including the behavior of the other partner improves the prediction performance. Furthermore, for men, considering how their female partners spoke is most important, and for women, considering what their male partners said is most important for better prediction performance. This work is a step towards automatically recognizing each partner's emotions based on the behavior of both, which would enable a better understanding of couples in research, therapy, and the real world.
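To make the described pipeline concrete, the following is a minimal sketch, assuming the HuggingFace transformers and opensmile Python packages; the model name, feature set, and data-loading step are illustrative assumptions rather than the authors' exact setup. It extracts BERT linguistic features and openSMILE paralinguistic features for both partners and concatenates them into one dyadic feature vector for classification.

```python
# Hedged sketch: dyadic feature extraction for post-conflict emotion prediction.
# Model name, feature set, and data loading are illustrative placeholders.
import numpy as np
import opensmile
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
bert = AutoModel.from_pretrained("bert-base-german-cased")

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,   # common paralinguistic set
    feature_level=opensmile.FeatureLevel.Functionals,
)

def linguistic_features(transcript: str) -> np.ndarray:
    """Mean-pooled BERT token embeddings for what a partner said."""
    inputs = tokenizer(transcript, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state  # (1, tokens, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

def paralinguistic_features(wav_path: str) -> np.ndarray:
    """openSMILE functionals for how a partner spoke."""
    return smile.process_file(wav_path).to_numpy().ravel()

def dyadic_vector(own_text, own_wav, partner_text, partner_wav):
    """Concatenate own and partner behavior into one feature vector."""
    return np.concatenate([
        linguistic_features(own_text), paralinguistic_features(own_wav),
        linguistic_features(partner_text), paralinguistic_features(partner_wav),
    ])

# X: one dyadic vector per target partner; y: post-conflict valence (0/1).
# X, y = build_dataset(...)  # hypothetical loader for the couples' data
# print(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
```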
Many processes in psychology are complex, such as dyadic interactions between two partners (e.g., patient-therapist, intimate relationship partners). Nevertheless, many basic questions about interactions are difficult to investigate because dyadic processes occur both within a person and between partners, are based on multimodal aspects of behavior, and unfold rapidly. Current analyses are mainly based on the behavioral coding method, whereby human coders annotate behavior based on a coding schema. But coding is labor-intensive, expensive, slow, focuses on few modalities, and produces sparse data, which has forced the field to average behaviors across entire interactions, thereby undermining the ability to study processes on a fine-grained scale. Current approaches in psychology use LIWC for analyzing couples' interactions. However, advances in natural language processing such as BERT could enable the development of systems that potentially automate behavioral coding, which in turn could substantially improve psychological research. In this work, we train machine learning models to automatically predict positive and negative communication behavioral codes of 368 German-speaking Swiss couples during an 8-minute conflict interaction on a fine-grained scale (10-second sequences), using linguistic features (TF-IDF and BERT) and paralinguistic features derived with openSMILE. Our results show that both the simpler TF-IDF features and the more complex BERT features performed better than LIWC.
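As an illustration of the simpler end of this comparison, here is a minimal, self-contained sketch of a TF-IDF classifier over short transcript sequences, using scikit-learn; the toy German sequences and labels are invented for demonstration and stand in for the coded 10-second slices.

```python
# Minimal sketch (assumed details): classify 10-second transcript sequences
# as positive vs. negative communication using TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins: transcripts of 10-second slices with behavioral codes
# (0 = negative communication, 1 = positive communication).
sequences = ["Du hörst mir nie zu.", "Ich verstehe, was du meinst."]
codes = [0, 1]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # unigrams and bigrams
    LogisticRegression(max_iter=1000),
)
model.fit(sequences, codes)
print(model.predict(["Das ist eine gute Idee."]))  # predicted code for a new slice
```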
Extensive couples' literature shows that how couples feel after a conflict is predicted by certain emotional aspects of that conversation. Understanding the emotions of couples leads to a better understanding of partners' mental well-being and, consequently, their relationships. Hence, automatic emotion recognition among couples could potentially guide interventions to help couples improve their emotional well-being and their relationships. It has been shown that people's global emotional judgment after an experience is strongly influenced by the emotional extremes and the ending of that experience, known as the peak-end rule. In this work, we leveraged this theory and used machine learning to investigate which audio segments can be used to best predict the end-of-conversation emotions of couples. We used speech data collected from 101 Dutch-speaking couples in Belgium who engaged in 10-minute conversations in the lab. We extracted acoustic features from (1) the audio segments with the most extreme positive and negative ratings, and (2) the ending of the audio. We used transfer learning, extracting these acoustic features with a pre-trained convolutional neural network (YAMNet). We then used these features to train machine learning models (support vector machines) to predict the end-of-conversation valence ratings (positive vs. negative) of each partner. The results of this work could inform how to best recognize the emotions of couples after conversation sessions and eventually lead to a better understanding of couples' relationships, whether in therapy or in everyday life.
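The transfer-learning step could look like the following hedged sketch, assuming YAMNet loaded from TensorFlow Hub; the selection of peak and end segments from the ratings and the dataset itself are placeholders, not the authors' exact code. YAMNet expects 16 kHz mono float32 audio and yields one 1024-dimensional embedding per frame, which is mean-pooled per segment before training the SVM.

```python
# Hedged sketch of the transfer-learning step: embed the peak and end audio
# segments with pre-trained YAMNet, then train an SVM on pooled embeddings.
import numpy as np
import tensorflow_hub as hub
from sklearn.svm import SVC

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")  # expects 16 kHz mono float32

def embed_segment(waveform: np.ndarray) -> np.ndarray:
    """Mean-pool YAMNet's 1024-d frame embeddings over one audio segment."""
    _scores, embeddings, _spectrogram = yamnet(waveform.astype(np.float32))
    return embeddings.numpy().mean(axis=0)

def peak_end_features(peak_pos, peak_neg, ending):
    """Concatenate embeddings of the extreme-rated segments and the ending."""
    return np.concatenate([embed_segment(s) for s in (peak_pos, peak_neg, ending)])

# X: one peak-end feature vector per partner; y: end-of-conversation valence.
# X, y = build_dataset(...)  # hypothetical loader for the couples' audio
# clf = SVC(kernel="rbf").fit(X, y)
```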
Background: Type II diabetes mellitus (T2DM) is a common chronic disease. To manage blood glucose levels, patients need to follow medical recommendations for healthy eating, physical activity, and medication adherence in their everyday life. Illness management is mainly shared with partners and involves social support and common dyadic coping (CDC). Social support and CDC have been identified as having implications for people's health behavior and well-being. Visible support, however, may also be negatively related to people's well-being. Thus, the concept of invisible support was introduced. It is unknown which of these concepts (i.e., visible support, invisible support, and CDC) displays the most beneficial associations with health behavior and well-being when considered together in the context of illness management in couples' everyday life. Therefore, a novel ambulatory assessment application for the open-source behavioral intervention platform MobileCoach (AAMC) was developed. It uses objective sensor data in combination with self-reports in couples' everyday life.

Objective: The aim of this paper is to describe the design of the Dyadic Management of Diabetes (DyMand) study, funded by the Swiss National Science Foundation (CR12I1_166348/1). The study was approved by the cantonal ethics committee of the Canton of Zurich, Switzerland (Req-2017_00430).

Methods: This study follows an intensive longitudinal design with 2 phases of data collection. The first phase is a naturalistic observation phase of couples' conversations in combination with experience sampling in their daily lives, with plans to follow 180 T2DM patients and their partners using sensor data from smartwatches, mobile phones, and accelerometers for 7 consecutive days. The second phase is an observational study in the laboratory, where couples discuss topics related to their diabetes management. The second phase complements the first phase by focusing on the assessment of a full discussion about diabetes-related concerns. Participants are heterosexual couples with 1 partner having a diagnosis of T2DM.

Results: The AAMC was designed and built by the end of 2018 and internally tested in March 2019. In May 2019, enrollment for the pilot phase began. Data collection for the DyMand study will begin in September 2019, and analysis and presentation of results will be available in 2021.

Conclusions: For further research and practice, it is crucial to identify the impact of social support and CDC on couples' dyadic management of T2DM and their well-being in daily life. Using AAMC will make a key contribution with regard to objective operationalizations of visible and invisible support, CDC, physical activity, and well-being. Findings will provide a sound basis for theory- and evidence-based development of dyadic interventions to change health behavior in the context of couples' dyadic illness management. Challenges to this multimodal sensor approach and its feasibility aspects are discussed.

International Registered Report Identifier (IRRID): PRR1-10.2196/13685