This study proposed a wearable device capable of recognizing six human daily activities (walking, walking upstairs, walking downstairs, sitting, standing, and lying) through a deep learning algorithm. Existing wearable devices are mainly watches or wristbands, and almost none are designed to be worn on the waist. Watch- and wristband-type wearable devices are unsuitable for critically ill patients, such as patients undergoing dialysis, who have artificial blood vessels in their arm and cannot perform intense exercise. For these users, conventional wrist-worn devices cannot correctly identify the wearer's activities. Therefore, we proposed a waist-worn wearable device and used these six daily activities to assess the wearer's exercise. The hardware of the wearable device consisted of an inertial sensing module comprising a microcontroller, a three-axis accelerometer, and a three-axis gyroscope. The activity recognition software consisted of motion signal acquisition, signal normalization, and a feature learning method. The feature learning method was based on a 1D convolutional neural network that automatically performed feature extraction and classification from raw data. One part of the experimental data came from the University of California, Irvine (UCI) dataset, and the other part was recorded in this study. To collect the recorded data, the wearable inertial sensing device was attached to the waists of 21 participants who performed the six common movements in a laboratory environment, and the resulting records were used to verify the validity of the proposed deep learning algorithm with the inertial sensor of the wearable device. For the six common activities in the UCI dataset and the recorded data, the recognition rates on the training samples reached 98.93% and 97.19%, respectively, and the recognition rates on the testing samples were 95.99% and 93.77%, respectively.
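The abstract does not specify the exact network layout; the following is a minimal sketch, in PyTorch, of how a 1D convolutional neural network can classify raw six-channel inertial windows (three-axis accelerometer plus three-axis gyroscope) into the six activities. The 128-sample window length, kernel sizes, and channel counts are illustrative assumptions; only the six input channels and six output classes follow from the abstract.

```python
# Minimal sketch (not the authors' exact architecture): a 1D CNN that maps
# normalized six-channel inertial windows to one of six activity classes.
import torch
import torch.nn as nn

class HAR1DCNN(nn.Module):
    def __init__(self, in_channels=6, num_classes=6, window_len=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),  # local feature extraction
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window_len // 4), num_classes)

    def forward(self, x):                      # x: (batch, 6, window_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))   # class scores

# Example: one batch of 16 windows, 128 samples each
logits = HAR1DCNN()(torch.randn(16, 6, 128))   # -> shape (16, 6)
```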
This paper presents a machine learning scheme for dynamic time-warping-based (DTW) speech recognition. Two categories of learning strategies, supervised and unsupervised, were developed for DTW. Two supervised learning methods, incremental learning and priority-rejection learning, were proposed in this study. The incremental learning method is conceptually simple but leads to a large keyword database that must be searched to match the testing template. The priority-rejection learning method effectively reduces the matching time with only a slight decrease in recognition accuracy. In the unsupervised category, an automatic learning approach called "most-matching learning," which builds on priority-rejection learning, was developed in this study. Most-matching learning intelligently chooses the appropriate utterances for system learning. The effectiveness and efficiency of all three proposed machine learning approaches for DTW were demonstrated through keyword speech recognition experiments.
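The proposed learning strategies operate on top of a standard DTW template matcher, which the abstract takes as given. The sketch below shows only that underlying matcher in Python; the frame features (for example, MFCC vectors) and the incremental, priority-rejection, and most-matching strategies themselves are not implemented here.

```python
# Minimal sketch of the DTW distance used to match a test utterance against a
# stored keyword template; frame feature extraction is assumed to be done elsewhere.
import numpy as np

def dtw_distance(template, test):
    """Dynamic time warping distance between two (frames x dims) sequences."""
    n, m = len(template), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - test[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],                 # insertion
                                 cost[i, j - 1],                 # deletion
                                 cost[i - 1, j - 1])             # match
    return cost[n, m]

# Recognition: the keyword whose template yields the smallest DTW distance wins.
keyword_templates = {"yes": np.random.rand(40, 13), "no": np.random.rand(35, 13)}
utterance = np.random.rand(38, 13)
best = min(keyword_templates, key=lambda k: dtw_distance(keyword_templates[k], utterance))
```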
This paper presents a wearable device, fitted on a participant's waist, that recognizes six activities of daily living (walking, walking upstairs, walking downstairs, sitting, standing, and lying) through a deep-learning algorithm for human activity recognition (HAR). The wearable device comprises a single-board computer (SBC) and six-axis inertial sensors. The deep-learning algorithm employs three parallel convolutional neural networks with kernels of different sizes for local feature extraction, and their outputs are concatenated to form a feature fusion model. By using kernels of different sizes, relevant local features of varying lengths are captured, thereby increasing the accuracy of human activity recognition. Regarding experimental data, the University of California, Irvine (UCI) dataset and self-recorded data were used separately. The self-recorded data were obtained by having 21 participants wear the device on their waist and perform the six common activities in the laboratory. These data were used to verify the performance of the proposed deep-learning algorithm on the wearable device. The accuracies for the six activities in the UCI dataset and in the self-recorded data were 97.49% and 96.27%, respectively, and the accuracies under tenfold cross-validation were 99.56% and 97.46%, respectively. The experimental results successfully verified the proposed convolutional neural network (CNN) architecture, which can be used in rehabilitation assessment for people unable to exercise vigorously.
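A minimal sketch of the multi-kernel fusion idea described above, assuming three parallel 1D convolution branches whose pooled outputs are concatenated before a linear classifier. The kernel sizes (3, 7, 11), channel counts, and pooling choice are illustrative; the abstract states only that three parallel CNNs with different kernel sizes are fused.

```python
# Minimal sketch of parallel multi-kernel branches with feature fusion
# (illustrative sizes, not the authors' published configuration).
import torch
import torch.nn as nn

class MultiKernelHAR(nn.Module):
    def __init__(self, in_channels=6, num_classes=6, kernel_sizes=(3, 7, 11)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, 32, k, padding=k // 2),   # local features at scale k
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),                         # collapse the time axis
            )
            for k in kernel_sizes
        ])
        self.classifier = nn.Linear(32 * len(kernel_sizes), num_classes)

    def forward(self, x):                    # x: (batch, 6, window_len)
        fused = torch.cat([b(x).flatten(1) for b in self.branches], dim=1)  # feature fusion
        return self.classifier(fused)

logits = MultiKernelHAR()(torch.randn(8, 6, 128))   # -> shape (8, 6)
```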
This study integrated a fiber loop made of commercial fiber (SMF-28, Corning) and a fiber Bragg grating (FBG) to form a fiber-optic sensor that can simultaneously measure displacement and temperature. The fiber loop was placed in a thermoelectric cooling module with the FBG affixed to the module, so that the shift of the FBG center wavelength was affected only by temperature change. Displacement and temperature were then determined by measuring the change in transmitted optical power and the shift in Bragg wavelength, respectively. This study provides a simple and economical method for measuring displacement and temperature simultaneously.
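The abstract does not give the sensing relations explicitly; the LaTeX block below sketches the standard FBG Bragg condition and its temperature response, together with an assumed linearized power-displacement calibration for the fiber loop. Here $\alpha$ is the thermal expansion coefficient, $\xi$ the thermo-optic coefficient, and $P_0$ and $k$ are hypothetical calibration constants.

```latex
% Standard Bragg condition and temperature response; the displacement-power
% law is a linearized calibration assumed here for illustration.
\begin{align}
  \lambda_B &= 2\, n_{\mathrm{eff}}\, \Lambda, \\
  \frac{\Delta\lambda_B}{\lambda_B} &= (\alpha + \xi)\,\Delta T
    \quad\Rightarrow\quad \Delta T = \frac{\Delta\lambda_B}{\lambda_B\,(\alpha + \xi)}, \\
  P_{\mathrm{out}} &\approx P_0 - k\,\Delta d
    \quad\Rightarrow\quad \Delta d \approx \frac{P_0 - P_{\mathrm{out}}}{k}.
\end{align}
```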