In the context of the ageing global population, researchers and scientists have sought solutions to many of the challenges faced by older people. Falls, the leading cause of injury among the elderly, are usually severe enough to require immediate medical attention; thus, their detection is of primary importance. To this end, many fall detection systems that utilize wearable and ambient sensors have been proposed. In this study, we compare three recently proposed data fusion schemes that have been applied to human activity recognition and fall detection. Furthermore, these algorithms are compared with our recent work on fall detection, in which only one type of sensor is used. The results show that the fusion algorithms differ in their performance and that a machine learning strategy should be preferred. In conclusion, the methods presented and the comparison of their performance provide useful insights into the problem of fall detection.
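The abstract does not detail the fusion schemes themselves; as a purely illustrative sketch (not the authors' algorithms), feature-level fusion of wearable and ambient sensor features followed by a machine-learning classifier could look like the following. The feature dimensions, labels, and the choice of a random forest are assumptions made for illustration.

```python
# Illustrative sketch only: feature-level fusion of wearable and ambient
# sensor features followed by a machine-learning classifier. The features,
# labels, and classifier choice are assumptions, not the schemes compared
# in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features: 200 windows, 6 accelerometer features
# (wearable) and 4 ambient-sensor features, with binary labels
# (1 = fall, 0 = activity of daily living).
wearable_feats = rng.normal(size=(200, 6))
ambient_feats = rng.normal(size=(200, 4))
labels = rng.integers(0, 2, size=200)

# Feature-level fusion: concatenate per-window feature vectors from both
# sensor modalities before classification.
fused = np.concatenate([wearable_feats, ambient_feats], axis=1)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```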
The range of applications of electromyography-based gesture recognition has broadened in recent years. A problem regularly encountered in the literature is inadequate data availability. Data augmentation, which aims to generate new synthetic data from existing data, is the most common approach to this shortage in other research domains. For surface electromyography (sEMG) signals, research on augmentation methods is limited, and results often differ across the available studies. In this work, we provide a detailed evaluation of existing (i.e., additive noise, overlapping windows) and novel (i.e., magnitude warping, wavelet decomposition, synthetic sEMG models) data augmentation strategies for electromyography signals. A set of metrics (i.e., classification accuracy, silhouette score, and Davies–Bouldin index) and visualizations help assess the methods and provide insights into their performance. Methods such as signal magnitude warping and wavelet decomposition yield considerable increases (up to 16%) in classification accuracy across two benchmark datasets. In particular, a significant improvement of 1% in classification accuracy is achieved for the state-of-the-art model in hand gesture recognition.
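As a minimal sketch of two of the augmentation strategies named above (additive noise and magnitude warping) applied to a synthetic sEMG-like window, the following might serve as a starting point. The parameter values (noise level, number of warp knots, warp strength) and the toy signal are assumptions, not the settings evaluated in the study.

```python
# Illustrative sketch only: additive Gaussian noise and magnitude warping
# applied to a synthetic sEMG-like window. Parameter values are assumptions.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

def additive_noise(signal, snr_db=25.0):
    """Add white Gaussian noise at a target signal-to-noise ratio (in dB)."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

def magnitude_warp(signal, n_knots=4, sigma=0.2):
    """Scale the signal by a smooth random curve (cubic spline through
    random knots), perturbing amplitude while preserving overall shape."""
    knot_x = np.linspace(0, len(signal) - 1, n_knots + 2)
    knot_y = rng.normal(1.0, sigma, size=knot_x.shape)
    warp_curve = CubicSpline(knot_x, knot_y)(np.arange(len(signal)))
    return signal * warp_curve

# A toy 200-sample sEMG-like window (windowed random burst).
window = rng.normal(0.0, 1.0, 200) * np.hanning(200)

augmented = [additive_noise(window), magnitude_warp(window)]
print([a.shape for a in augmented])
```

Each augmented window would then be added to the training set alongside the original, and cluster-quality metrics such as the silhouette score or Davies–Bouldin index (e.g., from scikit-learn) can be used to assess how the synthetic samples relate to the original classes.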