Social networks are the persons surrounding a patient who provide support, circulate information, and influence health behaviors. For patients seen by neurologists, social networks are among the most proximate social determinants of health that are accessible to clinicians, compared with broader social forces such as structural inequalities. Social networks and related phenomena of social connection can now be measured with a growing set of scalable, quantitative tools, increasing familiarity with social network effects and mechanisms. This scientific approach is built on decades of neurobiological and psychological research highlighting the impact of the social environment on physical and mental well-being, nervous system structure, and neuro-recovery. Here, we review the biology and psychology of social networks, assessment methods including novel social sensors, and the design of network interventions and social therapeutics.
Although traditional methods of data collection in naturalistic settings can shed light on constructs of interest to researchers, advances in sensor-based technology allow researchers to capture continuous physiological and behavioral data to provide a more comprehensive understanding of the constructs that are examined in a dynamic health care setting. This study gives examples of implementing technology-facilitated approaches and provides the following recommendations for conducting such longitudinal, sensor-based research, with both environmental and wearable sensors in a health care setting: pilot test sensors and software early and often; build trust with key stakeholders and with potential participants who may be wary of sensor-based data collection and concerned about privacy; generate excitement for novel technology during recruitment; monitor incoming sensor data to troubleshoot sensor issues; and consider the logistical constraints of sensor-based research. The study describes how these recommendations were successfully implemented by providing examples from a large-scale, longitudinal, sensor-based study of hospital employees at a large hospital in California. The knowledge gained from this study may be helpful to researchers interested in obtaining dynamic, longitudinal sensor data from both wearable and environmental sensors in a health care setting (eg, a hospital) to obtain a more comprehensive understanding of constructs of interest in an ecologically valid, secure, and efficient way.
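The recommendation to monitor incoming sensor data for troubleshooting can be illustrated with a minimal sketch: scanning a stream of reading timestamps for gaps that exceed an expected sampling interval. The 5-minute threshold and the example timestamps are illustrative assumptions, not parameters from the study.

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, max_interval=timedelta(minutes=5)):
    """Flag intervals where consecutive sensor readings are farther apart
    than max_interval, indicating possible sensor dropout or sync failure."""
    gaps = []
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > max_interval:
            gaps.append((prev, curr))
    return gaps

# Hypothetical wearable-sensor timestamps with a 12-minute dropout.
ts = [datetime(2024, 1, 1, 9, 0) + timedelta(minutes=m) for m in (0, 1, 2, 14, 15)]
gaps = find_gaps(ts)  # one gap: 09:02 -> 09:14
```

In practice such a check would run on each day's incoming data so that a failing sensor is replaced before weeks of recordings are lost.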
Active speaker detection in videos is the task of associating a source face, visible in the video frames, with the underlying speech in the audio modality. The two primary sources of information for deriving such a speech-face relationship are i) visual activity and its interaction with the speech signal and ii) co-occurrences of speakers' identities across modalities in the form of face and speech. The two approaches have their limitations: the audio-visual activity models get confused with other frequently occurring vocal activities, such as laughing and chewing, while the speakers' identity-based methods are limited to videos having enough disambiguating information to establish a speech-face association. Since the two approaches are independent, we investigate their complementary nature in this work. We propose a novel unsupervised framework to guide the speakers' cross-modal identity association with the audio-visual activity for active speaker detection. Through experiments on entertainment media videos from two benchmark datasets--the AVA active speaker (movies) and Visual Person Clustering Dataset (TV shows)--we show that a simple late fusion of the two approaches enhances the active speaker detection performance.
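The "simple late fusion" mentioned above can be sketched as a weighted average of the two models' per-face scores. The scores, the equal weighting, and the argmax decision rule here are illustrative assumptions, not the paper's exact implementation:

```python
def late_fusion(activity_scores, identity_scores, w=0.5):
    """Combine per-face scores from an audio-visual activity model and a
    cross-modal identity model by weighted averaging (late fusion)."""
    return [w * a + (1 - w) * i for a, i in zip(activity_scores, identity_scores)]

# Hypothetical per-face scores for three faces in one frame.
activity = [0.9, 0.2, 0.6]   # audio-visual activity model
identity = [0.8, 0.1, 0.3]   # cross-modal identity association
fused = late_fusion(activity, identity)
speaker = max(range(len(fused)), key=fused.__getitem__)  # predicted active speaker
```

Because fusion happens at the score level, each model can be trained and run independently, which is what makes the complementarity of the two cues easy to exploit.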
Physiological linkage refers to moment-to-moment, time-linked coordination in physiological responses among people in close relationships. Although people in romantic relationships have been shown to evidence linkage in their physiological responses over time, it is still unclear how patterns of covariation relate to in-the-moment, as well as general levels of, relationship functioning. In the present study, with data collected between 2014 and 2017, we capture linkage in electrodermal activity (EDA) in a diverse sample of young-adult couples, broadly representative of the Los Angeles community from which we sampled. We test how naturally occurring, shifting feelings of closeness with and annoyance toward one's partner relate to concurrent changes in levels of physiological linkage over the course of 1 day. Additionally, we examine how linkage relates to overall relationship satisfaction. Results showed that couples evidenced significant covariation in their levels of physiological arousal in daily life. Further, physiological linkage increased during hours that participants felt close to their romantic partners but not during hours that participants felt annoyed with their partners. Finally, those participants with overall higher levels of relationship satisfaction showed lower levels of linkage over the day of data collection. These findings highlight how individuals respond in sync with their romantic partners and how this process ebbs and flows in conjunction with the shifting emotional tone of their relationships. The discussion focuses on how linkage might enhance closeness or, alternatively, contribute to conflict escalation and the potential of linkage processes to promote positive interpersonal relationships.
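One simple way to quantify the moment-to-moment covariation described above is a windowed Pearson correlation between two partners' EDA series. This is a toy sketch of the general idea, not the study's statistical model; the window length and the synthetic signals are assumptions for illustration.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def windowed_linkage(eda_a, eda_b, window=60):
    """Correlate two partners' EDA signals within consecutive windows;
    the per-window correlations index moment-to-moment physiological linkage."""
    return [pearson(eda_a[i:i + window], eda_b[i:i + window])
            for i in range(0, len(eda_a) - window + 1, window)]

# Perfectly linearly related toy signals -> linkage of 1.0 in every window.
eda_a = [float(t) for t in range(120)]
eda_b = [2.0 * v + 1.0 for v in eda_a]
linkage = windowed_linkage(eda_a, eda_b, window=60)
```

Real analyses of this kind typically also model time-lagged coupling and person-level effects, which a plain windowed correlation does not capture.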
Computational machine intelligence approaches have enabled a variety of music-centric technologies in support of creating, sharing and interacting with music content. A strong performance on specific downstream application tasks, such as music genre detection and music emotion recognition, is paramount to ensuring broad capabilities for computational music understanding and Music Information Retrieval (MIR). Traditional approaches have relied on supervised learning to train models to support these music-related tasks. However, such approaches require copious annotated data and still may only provide insight into one view of music—namely, that related to the specific task at hand. We present a new model for supporting music understanding that leverages self-supervision and cross-domain learning. After pre-training a bidirectional self-attention transformer using masked reconstruction, the model is fine-tuned on several downstream music understanding tasks. The results show that our multi-modal, multi-task, music transformer model, which we call M3BERT, generates features that result in better performance on several music-related tasks, indicating the potential of self-supervised and semi-supervised learning approaches toward a more generalized and robust computational approach to modeling music. Our work can offer a starting point for many music-related modeling tasks, with potential applications in learning deep representations and enabling robust downstream applications.
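The masked-reconstruction pre-training objective can be sketched in miniature: randomly corrupt some positions of a feature sequence, then score the model only on how well it reconstructs those positions. This is a toy illustration of the BERT-style objective, not the M3BERT architecture; the masking probability and mask value are assumptions.

```python
import random

def mask_sequence(seq, mask_prob=0.15, mask_value=0.0, rng=random):
    """Randomly replace elements with a mask value, returning the corrupted
    sequence and the indices the model must reconstruct (BERT-style)."""
    corrupted, masked_idx = list(seq), []
    for i in range(len(seq)):
        if rng.random() < mask_prob:
            corrupted[i] = mask_value
            masked_idx.append(i)
    return corrupted, masked_idx

def masked_mse(pred, target, masked_idx):
    """Mean squared reconstruction error computed only at masked positions,
    so the model is not rewarded for copying unmasked input."""
    if not masked_idx:
        return 0.0
    return sum((pred[i] - target[i]) ** 2 for i in masked_idx) / len(masked_idx)
```

In a full pipeline, `corrupted` would be fed to the transformer and `masked_mse` applied to its output against the original features; here the two pieces are shown in isolation.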