There are about 900,000 people with Parkinson's disease (PD) in the United States. Despite the benefits of early treatment, over 40% of individuals with PD over 65 years old do not see a neurologist, and it is often very difficult for these individuals to travel to a physician's office for diagnosis and subsequent monitoring. To address this problem, we present PARK (Parkinson's Analysis with Remote Kinetic-tasks), which instructs and guides users through six motor tasks and one audio task selected from the standardized MDS-UPDRS rating scale and records their performance via webcam. An initial experiment was conducted with 127 participants with PD and 127 age-matched controls, yielding a total of 1,778 video recordings. Of the participants with PD, 90.6% agreed that PARK was easy to use, and 93.7% said they would use the system in the future. To explore objective differences between those with and without PD, we designed a novel motion feature based on the Fast Fourier Transform (FFT) of optical flow within a region of interest and applied it to the collected recordings. Additionally, we found that facial action unit AU4 (brow lowerer) was expressed significantly more often, and AU12 (lip corner puller) less often, across various tasks for participants with PD.
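The abstract does not specify how the motion feature is implemented; the following is a minimal sketch of one way such a feature could be computed, assuming OpenCV's Farneback dense optical flow and NumPy's FFT. The function name, ROI format, and flow parameters are illustrative, not taken from the paper.

```python
import numpy as np
import cv2

def fft_motion_feature(frames, roi):
    """Illustrative sketch: amplitude spectrum of the optical-flow
    magnitude over time inside a region of interest (x, y, w, h)."""
    x, y, w, h = roi
    magnitudes = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow (Farneback) between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Mean motion magnitude within the ROI for this frame pair
        mag = np.linalg.norm(flow[y:y+h, x:x+w], axis=2).mean()
        magnitudes.append(mag)
        prev = gray
    # FFT of the mean-centered motion-magnitude time series;
    # peaks in this spectrum could reflect tremor-like periodic motion
    mags = np.asarray(magnitudes)
    return np.abs(np.fft.rfft(mags - mags.mean()))
```

A feature like this turns a video into a frequency signature, so periodic movement differences between groups can be compared quantitatively.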
Just as the smartphone and the Internet have significantly changed our daily lives, artificial intelligence (AI) applications have begun to affect them profoundly as well. Two major products of this relatively recent trend are virtual assistants and home robots. They share functional characteristics: both interact with users through conversational agents and attempt to imitate human behavior, and home robots additionally host a virtual assistant alongside their mechanical capabilities. There have been many discussions of the risks, challenges, and future vision associated with the proliferation of AI at the industrial level; these discussions, however, have not yet widely extended to the user level in the context of daily life. In this article, we provide a review discussing the benefits, risks, challenges, open questions, and future vision of using virtual assistants and social robots in daily life.
Despite a revolution in the pervasiveness of video cameras in our daily lives, one of the most meaningful forms of nonverbal affective communication, interpersonal eye gaze, i.e., eye gaze relative to a conversation partner, is not available from common video. We introduce the Interpersonal-Calibrating Eye-gaze Encoder (ICE), which automatically extracts interpersonal gaze from video recordings without specialized hardware and without prior knowledge of participant locations. Leveraging the intuition that individuals spend a large portion of a conversation looking at each other, the ICE dynamic clustering algorithm extracts interpersonal gaze. We validate ICE both in video chat, using an objective metric from an infrared gaze tracker (F1=0.846, N=8), and in face-to-face communication, using expert-rated evaluations of eye contact (r=0.37, N=170). We then use ICE to analyze behavior in two distinct yet important affective communication domains: interrogation-based deception detection and communication skill assessment in speed dating. We find that honest witnesses break interpersonal gaze and look down more often than deceptive witnesses when answering questions (p=0.004, d=0.79). In predicting expert communication skill ratings in speed dating videos, we demonstrate that interpersonal gaze alone has more predictive power than facial expressions.
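ICE's actual dynamic clustering algorithm is not detailed in the abstract; below is a simplified sketch of the stated intuition, assuming per-frame gaze-direction estimates (e.g., yaw and pitch from an off-the-shelf tracker such as OpenFace) and substituting plain k-means for the paper's method. All names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def label_interpersonal_gaze(gaze_angles, n_clusters=2):
    """Simplified sketch of the ICE intuition: since conversation
    partners look at each other most of the time, the most populated
    cluster of gaze directions is treated as 'looking at partner'.

    gaze_angles: (n_frames, 2) array of (yaw, pitch) estimates.
    Returns a boolean mask, True where gaze is directed at the partner.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(gaze_angles)
    # The dominant cluster is the most frequent label across frames
    labels, counts = np.unique(km.labels_, return_counts=True)
    partner_cluster = labels[np.argmax(counts)]
    return km.labels_ == partner_cluster
```

Because the dominant cluster is found per recording, this style of calibration needs no prior knowledge of where participants sit relative to the camera.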
We developed an online framework that can automatically pair two crowd-sourced participants, prompt them to follow a research protocol, and record their audio and video on a remote server. The framework comprises two web applications: an Automatic Quality Gatekeeper, which ensures that only high-quality crowd-sourced participants are recruited for the study, and a Session Controller, which directs participants through a research protocol such as an interrogation game. This framework was used to run a study analyzing facial expressions during honest and deceptive communication using a novel interrogation protocol. The protocol gathers two sets of nonverbal facial cues from participants: features expressed during questions relating to the interrogation topic and features expressed during control questions. The framework and protocol were used to gather 151 dyadic conversations (1.3 million video frames). Interrogators who were lied to expressed the smile-related lip corner puller cue more often than interrogators who were told the truth, suggesting that facial cues from interrogators may be useful in evaluating the honesty of witnesses in some contexts. Overall, these results demonstrate that the framework can gather high-quality data capable of identifying statistically significant effects in a communication study.
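The framework's code and API are not given in the abstract; as a rough illustration of the pairing step a Session Controller performs, here is a toy asyncio-based sketch. The class, method names, and demo flow are hypothetical.

```python
import asyncio

class SessionController:
    """Toy sketch (hypothetical names): queue vetted participants and
    start a session once two of them are available."""

    def __init__(self):
        self.waiting: asyncio.Queue[str] = asyncio.Queue()

    async def join(self, participant_id: str) -> None:
        # Called after a participant passes the quality gatekeeper.
        await self.waiting.put(participant_id)

    async def pair_loop(self) -> None:
        while True:
            a = await self.waiting.get()
            b = await self.waiting.get()
            # In the real framework, both clients would now be guided
            # through the protocol while audio/video is recorded remotely.
            print(f"Paired {a} with {b}; starting session")

async def demo() -> None:
    ctrl = SessionController()
    pairing = asyncio.create_task(ctrl.pair_loop())
    for pid in ["p1", "p2", "p3", "p4"]:
        await ctrl.join(pid)
    await asyncio.sleep(0.1)  # let the pair loop drain the queue
    pairing.cancel()

if __name__ == "__main__":
    asyncio.run(demo())
```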