In recent years, with the widespread adoption of sensors embedded in all kinds of mobile devices, human activity analysis is occurring more often in several domains, such as healthcare monitoring and fitness tracking. This trend has also entered the equestrian world, because monitoring behaviours can yield important information about the health and welfare of horses. In this research, a deep learning-based approach for activity detection of equines is proposed to classify seven activities based on accelerometer data. We propose using Convolutional Neural Networks (CNNs), which extract features automatically by exploiting strong computing capabilities. Furthermore, we investigate the impact of the sampling frequency, the time series length, and the type of surface on which the data is gathered on the recognition accuracy, and evaluate the model on three types of experimental datasets compiled from labelled accelerometer data gathered from six different subjects performing seven different activities. Afterwards, a horse-wise cross-validation is carried out to investigate the impact of the subjects themselves on the model's recognition accuracy. Finally, a slightly adjusted model is validated on different amounts of 50 Hz sensor data. A 99% accuracy can be reached for detecting seven behaviours of a seen horse when the sampling rate is 25 Hz and the time interval is 2.1 s. Four behaviours of an unseen horse can be detected with the same accuracy when the sampling rate is 69 Hz and the time interval is 2.4 s. Moreover, the accuracy of the model for the three datasets decreased on average by about 4.75% when the sampling rate was decreased from 200 Hz to 25 Hz, and by 5.27% when the time interval was decreased from 3 s to 0.6 s. In addition, the classification performance for the activity "walk" was not influenced by the type of surface the horse was moving on, and the model could even infer on which surface the data was gathered for three out of four surfaces, with accuracies above 93% at time intervals longer than 1.2 s. This enables the evaluation of activity patterns in real-world circumstances. The performance and generalisation ability of the model are validated on 50 Hz data from different horse types, using tenfold cross-validation, reaching a mean classification accuracy of 97.84% and 96.10% when validated on a lame horse and a pony, respectively. Moreover, in this work we show that using data from one sensor comes at the cost of only a 0.24% reduction in accuracy (99.42% vs 99.66%).
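The abstract does not specify the network architecture, so the following is only a minimal sketch of the kind of 1D CNN such a pipeline typically uses: tri-axial accelerometer windows (e.g., 2.1 s at 25 Hz, roughly 52 samples) mapped to seven activity classes. All layer sizes, the use of PyTorch, and the global pooling trick for handling variable window lengths are assumptions for illustration, not the authors' exact model.

```python
import torch
import torch.nn as nn

class ActivityCNN(nn.Module):
    """Hypothetical 1D CNN for 7-class equine activity recognition
    from tri-axial accelerometer windows. Sketch only; layer sizes
    are illustrative assumptions, not the paper's architecture."""

    def __init__(self, n_channels=3, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # makes the model window-length agnostic
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):             # x: (batch, 3, T)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = ActivityCNN()
logits = model(torch.randn(8, 3, 52))  # batch of 8 windows, 2.1 s at 25 Hz
```

The adaptive pooling at the end is one simple way to evaluate the same model across the different sampling rates and interval lengths the abstract compares, since it removes the fixed-input-length constraint.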
A thorough analysis of sports is becoming increasingly important during the training process of badminton players at both the recreational and professional levels. Nowadays, game situations are usually filmed and reviewed afterwards in order to analyze the game, but these video set-ups tend to be difficult to analyze, expensive, and intrusive to set up. In contrast, we classified badminton movements using off-the-shelf accelerometer and gyroscope data. To this end, we organized a data capturing campaign and designed a novel neural network that uses different frame sizes as input. This paper shows that, with only accelerometer data, our novel convolutional neural network is able to distinguish nine activities with 86% precision when using a sampling frequency of 50 Hz. Adding the gyroscope data increases the precision to up to 99%, compared to, respectively, 79% and 88% when using a traditional convolutional neural network. In addition, our paper analyzes the impact of different sensor placement options and discusses the impact of different sampling frequencies of the sensors. As such, our approach provides a low-cost solution that is easy to use and can collect useful information for the analysis of a badminton game.
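One plausible reading of "different frame sizes as input" is a multi-scale network with parallel branches that view the same sensor window at different temporal resolutions. The sketch below assumes that interpretation; the branch design, kernel sizes, and the six-channel (accelerometer plus gyroscope) input layout are illustrative guesses, not the authors' published model.

```python
import torch
import torch.nn as nn

class MultiFrameCNN(nn.Module):
    """Sketch of a multi-scale CNN: two branches with different
    temporal kernel sizes see the same accel+gyro window. All
    dimensions are assumptions for illustration."""

    def __init__(self, n_channels=6, n_classes=9):
        super().__init__()

        def branch(k):
            return nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),
            )

        self.short = branch(3)    # fine temporal detail of a stroke
        self.long = branch(11)    # coarser overall stroke shape
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):         # x: (batch, 6, T), accel + gyro axes
        z = torch.cat([self.short(x).squeeze(-1),
                       self.long(x).squeeze(-1)], dim=1)
        return self.classifier(z)

logits = MultiFrameCNN()(torch.randn(4, 6, 100))  # 2 s windows at 50 Hz
```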
To cope with the increasing number of co-existing wireless standards, complex machine learning techniques have been proposed for wireless technology classification. However, machine learning techniques in the scientific literature suffer from some shortcomings, namely: (i) they are often trained using data from only a single measurement location, and as such the results do not necessarily generalise, and (ii) they typically do not evaluate the complexity/accuracy trade-offs of the proposed solutions. To remedy these shortcomings, this paper investigates which resource-friendly approaches are suitable across multiple heterogeneous environments. To this end, the paper designs and evaluates classifiers for LTE, Wi-Fi, and DVB-T technologies using multiple datasets to investigate the complexity/accuracy trade-offs between manual feature extraction and automatic feature learning techniques. Our wireless technology classification reaches an accuracy of up to 99%.
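To make the manual-versus-learned trade-off concrete, here is a minimal sketch of the cheap end of that spectrum: a handful of hand-crafted statistics per IQ burst feeding a linear classifier. The specific features, the synthetic placeholder data, and the use of scikit-learn are assumptions for illustration; the paper's actual feature set and classifiers may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def manual_features(iq):
    """A few cheap hand-crafted statistics from a complex IQ burst:
    amplitude mean/std and instantaneous-frequency mean/std.
    Illustrative choice, not the paper's feature set."""
    amp = np.abs(iq)
    inst_freq = np.diff(np.unwrap(np.angle(iq)))
    return np.array([amp.mean(), amp.std(),
                     inst_freq.mean(), inst_freq.std()])

# Synthetic placeholder bursts and labels (0=LTE, 1=Wi-Fi, 2=DVB-T);
# real experiments would use captured spectrum data instead.
rng = np.random.default_rng(0)
X = np.array([manual_features(rng.normal(size=128) +
                              1j * rng.normal(size=128))
              for _ in range(300)])
y = rng.integers(0, 3, size=300)
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

A CNN operating on raw IQ samples sits at the other end of the trade-off: it learns its own features and tends to be more accurate, but at a much higher compute and memory cost than a four-feature linear model like this.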
Radio spectrum has become a scarce commodity due to the advent of several non-collaborative radio technologies that share the same spectrum. Recognizing the radio technology that accesses the spectrum is fundamental for defining spectrum management policies to mitigate interference. State-of-the-art approaches for technology recognition using machine learning are based on supervised learning, which requires an extensive labeled data set to perform well. However, if the technologies and their environment are entirely unknown, the labeling task becomes time-consuming and challenging. In this work, we present a Semi-supervised Learning (SSL) approach for technology recognition that exploits the capabilities of modern Software Defined Radios (SDRs) to build large unlabeled data sets of IQ samples, but requires only a few of them to be labeled to start the learning process. The proposed approach is implemented using a Deep Autoencoder, and the comparison is carried out against a Supervised Learning (SL) approach using a Deep Neural Network (DNN). Using the DARPA Colosseum test bed, we created an IQ sample data set of 16 unknown radio technologies and obtained a classification accuracy of >97% with both approaches when using the entire labeled data set. However, the proposed SSL approach achieves a classification accuracy of ≥70% while using only 10% of the labeled data. This corresponds to 4.6x better classification accuracy than the DNN using the same reduced labeled data set. More importantly, the proposed approach is more robust than the DNN under corrupted input, e.g., noisy signals, which gives up to 2x and 3x better accuracy at Signal-to-Noise Ratios (SNRs) of -5 dB and 0 dB, respectively.
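The SSL recipe the abstract describes can be sketched in two stages: pretrain an autoencoder on the abundant unlabeled IQ windows, then fit a small classifier head on the latent codes using only the few labeled examples. The layer dimensions, training loop, and placeholder tensors below are assumptions for illustration, not the authors' exact Deep Autoencoder.

```python
import torch
import torch.nn as nn

class IQAutoencoder(nn.Module):
    """Sketch of a dense autoencoder over flattened IQ windows.
    Dimensions are illustrative assumptions."""

    def __init__(self, n_in=256, n_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: unsupervised pretraining on unlabeled IQ windows
# (placeholder random data stands in for SDR captures).
ae = IQAutoencoder()
unlabeled = torch.randn(1024, 256)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(unlabeled), unlabeled)
    loss.backward()
    opt.step()

# Stage 2: a small 16-class head trained on the ~10% labeled subset,
# reusing the frozen encoder's latent codes.
head = nn.Linear(32, 16)
with torch.no_grad():
    latent = ae.encoder(unlabeled[:100])  # pretend these 100 are labeled
logits = head(latent)
```

This division of labour is what makes the approach label-efficient: the representation is learned from data that costs nothing to label, so the supervised stage only has to fit a small head.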
Indoor localization has many applications, such as Industry 4.0, warehouses, healthcare, and drones, where high accuracy is becoming more critical than ever. Recent advances in ultra-wideband localization systems allow high accuracies for multiple active users in line-of-sight environments, while they still introduce errors above 300 mm in non-line-of-sight environments due to multi-path effects. Current work tries to improve the localization accuracy of ultra-wideband through offline error correction approaches using popular machine learning techniques. However, these techniques are still limited to simple environments with few multi-path effects and focus on offline correction. With the upcoming demand for high-accuracy, low-latency indoor localization systems, there is a need to deploy (online) efficient error correction techniques with fast response times in dynamic and complex environments. To address this, we propose (i) a novel semi-supervised autoencoder-based machine learning approach for improving the ranging accuracy of ultra-wideband localization beyond the limitations of current improvements, while aiming for performance gains and a small memory footprint, and (ii) an edge inference architecture for online UWB ranging error correction. As such, this paper enables the design of accurate localization systems using machine learning on low-cost edge devices. Compared to a deep neural network (as the state of the art, with a baseline error of 75 mm), the proposed autoencoder achieves a 29% higher accuracy. The proposed approach enables robust and accurate ultra-wideband localization, reducing the error from 214 mm without correction to 58 mm with correction. Validation of edge inference with the proposed autoencoder on an NVIDIA Jetson Nano demonstrates significant uplink bandwidth savings and supports up to 20 rapidly ranging anchors per edge GPU.
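A common shape for such semi-supervised ranging correction is an autoencoder that learns to reconstruct channel impulse response (CIR) features from unlabeled measurements, with a small regression head on the latent code predicting the ranging error to subtract from the raw range. The sketch below assumes that structure; the CIR length, layer sizes, and the correction step are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class RangingCorrector(nn.Module):
    """Sketch of autoencoder-based UWB ranging error correction:
    reconstruction loss uses unlabeled CIRs, the error head is fit
    on the labeled subset. Dimensions are illustrative assumptions."""

    def __init__(self, n_cir=152, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_cir, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_cir))
        self.error_head = nn.Linear(n_latent, 1)  # predicted ranging error (mm)

    def forward(self, cir):
        z = self.encoder(cir)
        return self.decoder(z), self.error_head(z)

model = RangingCorrector()
cir = torch.randn(32, 152)                   # placeholder CIR batch
recon, err_mm = model(cir)

# Online correction step at the edge: subtract the predicted error
# from the raw two-way-ranging measurement.
raw_range_mm = torch.full((32, 1), 2000.0)
corrected_mm = raw_range_mm - err_mm
```

Because only the encoder and the single-output head are needed at inference time, the deployed model stays small, which is what makes running it per ranging exchange on an edge GPU such as a Jetson Nano plausible.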