To facilitate the broader use of EMG signal whitening, we studied four whitening procedures of varying complexity, as well as the roles of sampling rate and noise correction. We separately analyzed force-varying and constant-force contractions from 64 subjects who completed constant-posture tasks about the elbow over a range of forces from 0% to 50% maximum voluntary contraction (MVC). From the constant-force tasks, we found that noise correction via the root difference of squares (RDS) method consistently reduced EMG recording noise, often by a factor of 5–10. All other primary results were from the force-varying contractions. Sampling at 4096 Hz provided small but statistically significant improvements over sampling at 2048 Hz (~3%), which, in turn, provided small improvements over sampling at 1024 Hz (~4%). Comparing equivalent processing variants at a sampling rate of 4096 Hz, whitening filters calibrated to the EMG spectrum of each subject generally performed best (4.74% MVC EMG-force error), followed by one universal whitening filter for all subjects (4.83% MVC error), a high-pass filter whitening method (4.89% MVC error), and a first-difference whitening filter (4.91% MVC error); however, none of these differed statistically. Each significantly improved on the EMG-force error obtained without whitening (5.55% MVC). The first difference is an excellent whitening option over this range of contraction forces, since it requires no calibration or algorithm decisions.
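As a point of reference, a minimal NumPy sketch of the two simplest operations named above, RDS noise correction of an EMG amplitude estimate and first-difference whitening, might look as follows; the function names and array-based interface are illustrative assumptions, not the processing pipeline used in the study.

```python
import numpy as np

def rds_noise_correct(emg_amplitude, noise_amplitude):
    """Root difference of squares (RDS): subtract the rest-recording noise
    amplitude from the measured EMG amplitude in the power domain, flooring
    at zero so the square root stays real-valued."""
    return np.sqrt(np.maximum(emg_amplitude**2 - noise_amplitude**2, 0.0))

def first_difference_whiten(emg):
    """First-difference whitening filter y[n] = x[n] - x[n-1]: its gain grows
    with frequency, which approximately flattens (whitens) the EMG spectrum
    and requires no per-subject calibration."""
    return np.diff(emg, prepend=emg[0])
```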
Emotion care for human well-being is important at all ages. In this paper, we propose an emotion care system based on big-data analysis for training patients with autism spectrum disorder, where emotion is detected from facial expression. Expressions can be captured through a camera as well as Internet of Things (IoT)-enabled devices. The system applies deep learning to emotional big data to extract emotional features and recognize six kinds of facial expression, both in real time and offline. A convolutional neural network (CNN) model based on the MobileNet V1 architecture is trained on two emotion datasets: the FER-2013 dataset and a newly proposed dataset named MCFER. Experiments on three strategies showed that the proposed system with the deep learning model achieved an accuracy of 95.89%. The system can also detect and track multiple faces and recognize facial expressions with high performance on mobile devices, at speeds of up to 12 frames per second.
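The paper's training code is not reproduced here; purely as an illustration of the described architecture, a MobileNet V1 backbone with a six-class softmax head could be sketched in Keras as below. The input size, optimizer, and helper name build_expression_model are assumptions for the sketch, not details taken from the paper.

```python
import tensorflow as tf

NUM_CLASSES = 6  # six facial expressions, per the abstract

def build_expression_model(input_shape=(128, 128, 3)):
    """MobileNet V1 backbone with a small six-way classification head
    (illustrative sketch, trained from scratch rather than from ImageNet)."""
    base = tf.keras.applications.MobileNet(
        input_shape=input_shape,
        include_top=False,   # drop the 1000-class ImageNet head
        weights=None,        # train from scratch on FER-2013 / MCFER
        pooling="avg",
    )
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    return tf.keras.Model(base.input, outputs)

model = build_expression_model()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```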