Compared with traditional machine learning approaches using state-of-the-art features, we achieved significantly improved overall performance.
The applicability of sensor-based human activity recognition in sports has been repeatedly shown for laboratory settings. However, transferability to real-world scenarios cannot be guaranteed due to limitations in data and evaluation methods. Using the example of football shot and pass detection against a null class, we explore the influence of those factors on real-world event classification in field sports. For this purpose, we compare the performance of a Support Vector Machine (SVM) established in the literature for laboratory settings with its performance in three evaluation scenarios that gradually evolve from laboratory settings to real-world conditions. In addition, three different types of neural networks, namely a convolutional neural network (CNN), a long short-term memory network (LSTM) and a convolutional LSTM (convLSTM), are compared. Results indicate that the SVM is not able to reliably solve the investigated three-class problem. In contrast, all deep learning models reach high classification scores, showing the general feasibility of event detection in real-world sports scenarios using deep learning. The maximum performance, with a weighted F1-score of 0.93, was reported by the CNN. The study provides valuable insights for sports assessment under practically relevant conditions. In particular, it shows that (1) the discriminative power of established features needs to be reevaluated when real-world conditions are assessed, (2) the selection of an appropriate dataset and evaluation method are both required to evaluate real-world applicability, and (3) deep learning-based methods yield promising results for real-world HAR in sports despite high variation in the execution of activities.
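The weighted F1-score used above can be reproduced from per-class precision and recall, with each class's F1 weighted by its support. A minimal pure-Python sketch; the three-class labels and toy predictions are illustrative and not taken from the study's data:

```python
def weighted_f1(y_true, y_pred):
    """Weighted F1: average of per-class F1 scores, weighted by class support."""
    classes = sorted(set(y_true))
    total, score = len(y_true), 0.0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        support = sum(t == c for t in y_true)
        score += f1 * support / total
    return score

# Toy three-class example (shot / pass / null class), invented for illustration:
y_true = ["shot", "shot", "pass", "pass", "null", "null"]
y_pred = ["shot", "pass", "pass", "pass", "null", "shot"]
print(round(weighted_f1(y_true, y_pred), 3))  # → 0.656
```

Because the weighting follows class support, a frequent null class dominates the score, which is why evaluation against realistic class distributions matters.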
Monitoring stress is relevant in many areas, including sports science. In that scope, various studies showed the feasibility of stress classification using eye tracking data. In most cases, the screen-based experimental design restricted the motion of participants. Consequently, the transferability of results to dynamic sports applications remains unclear. To address this research gap, we conducted a virtual reality-based stress test consisting of a football goalkeeping scenario. We contribute by proposing a stress classification pipeline solely relying on gaze behaviour and pupil diameter metrics extracted from the recorded data. To optimize the analysis pipeline, we applied feature selection and compared the performance of different classification methods. Results show that the Random Forest classifier achieves the best performance with 87.3% accuracy, comparable to state-of-the-art approaches fusing eye tracking data and additional biosignals. Moreover, our approach outperforms existing methods exclusively relying on eye measures.
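One common way to realize the feature-selection step in such a pipeline is a filter score that ranks each gaze or pupil feature by how well it separates the two stress conditions. The Fisher-style score below is an illustrative sketch, not the study's actual selection method; the feature names and data are invented:

```python
def fisher_score(values, labels):
    """Between-class mean separation divided by within-class variance (two classes)."""
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / len(xs)
    denom = var(a) + var(b)
    return (mean(a) - mean(b)) ** 2 / denom if denom else float("inf")

# Invented toy data: pupil diameter shifts clearly under stress, fixation count does not.
labels = [0, 0, 0, 1, 1, 1]          # 0 = relaxed, 1 = stressed
features = {
    "pupil_diameter_mm": [3.0, 3.1, 2.9, 4.0, 4.1, 3.9],
    "fixation_count":    [12, 15, 13, 14, 12, 15],
}
ranked = sorted(features, key=lambda f: fisher_score(features[f], labels), reverse=True)
print(ranked[0])  # the most discriminative feature is selected first
```

Features ranked this way can then be fed to the classifiers compared in the study (e.g., a Random Forest), keeping only the top-scoring subset.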
In human activity recognition (HAR), activities are automatically recognized and classified from a continuous stream of input sensor data. Although the scientific community has developed multiple approaches for various sports in recent years, marginal sports are rarely considered. These approaches cannot be directly applied to marginal sports, where available data are sparse and costly to acquire. Thus, we recorded and annotated inertial measurement unit (IMU) data containing different types of Ultimate Frisbee throws to investigate whether Convolutional Neural Networks (CNNs) and transfer learning can address this problem. The relevant actions were automatically detected and classified using a CNN. The proposed pipeline reaches an accuracy of 66.6%, distinguishing between nine different fine-grained classes. For the classification of the three basic throwing techniques, we achieve an accuracy of 89.9%. Furthermore, the results were compared to a transfer learning-based approach using a beach volleyball dataset as the source. Even though transfer learning could not improve the classification accuracy, the training time was significantly reduced. Finally, the effect of transfer learning on a reduced dataset, i.e., without data augmentation, is analyzed. With the same number of training subjects, using the pre-trained weights improves the generalization capabilities of the network, i.e., increases accuracy and F1-score. This shows that transfer learning can be beneficial, especially when dealing with small datasets, as in marginal sports, and can therefore improve the tracking of marginal sports.
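The observation that pre-trained weights cut training time can be illustrated with a toy warm-start experiment: a model initialized from weights fitted on a related source task reaches a target loss in fewer updates than one trained from scratch. The tiny logistic-regression setup below is a hedged sketch of the principle only, not the paper's CNN pipeline; datasets and thresholds are invented:

```python
import math

def train(xs, ys, w=0.0, b=0.0, lr=0.5, target=0.3, max_epochs=1000):
    """Gradient descent on logistic loss; returns (epochs needed to reach target, w, b)."""
    sig = lambda z: 1 / (1 + math.exp(-z))
    def loss(w, b):
        eps = 1e-12
        return -sum(y * math.log(sig(w * x + b) + eps)
                    + (1 - y) * math.log(1 - sig(w * x + b) + eps)
                    for x, y in zip(xs, ys)) / len(xs)
    for epoch in range(max_epochs + 1):
        if loss(w, b) < target:
            return epoch, w, b
        gw = sum((sig(w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum((sig(w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
        w, b = w - lr * gw, b - lr * gb
    return max_epochs, w, b

# "Source task": pre-train on one toy dataset; "target task": a similar one.
src_x, src_y = [-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1]
tgt_x, tgt_y = [-2.5, -1.5, 1.5, 2.5], [0, 0, 1, 1]
_, w_pre, b_pre = train(src_x, src_y, max_epochs=200, target=0.0)   # pre-train fully
cold_epochs, _, _ = train(tgt_x, tgt_y)                             # from scratch
warm_epochs, _, _ = train(tgt_x, tgt_y, w=w_pre, b=b_pre)           # warm start
print(cold_epochs, warm_epochs)  # warm start reaches the target loss in fewer epochs
```

The same logic carries over to fine-tuning a CNN: transferred weights may not raise final accuracy, but they shorten the path to a usable model, which matters most when the target dataset is small.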
Confocal Laser Endomicroscopy (CLE), an optical imaging technique allowing non-invasive examination of the mucosa on a (sub)cellular level, has proven to be a valuable diagnostic tool in gastroenterology and shows promising results in various anatomical regions, including the oral cavity. Recently, the feasibility of automatic carcinoma detection for CLE images of sufficient quality was shown. However, in real-world datasets a high proportion of CLE images is corrupted by artifacts, among the most prevalent of which are motion-induced image deteriorations. In the scope of this work, algorithmic approaches for the automatic detection of motion artifact-tainted image regions were developed. This work thus provides an important step towards the clinical applicability of automatic carcinoma detection. Both conventional machine learning and novel deep learning-based approaches were assessed. The deep learning-based approach outperforms the conventional approaches, attaining an AUC of 0.90.
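An AUC such as the 0.90 reported here can be computed directly from classifier scores as the probability that a randomly chosen positive (artifact-tainted) region is scored higher than a randomly chosen negative (clean) one, with ties counting half. A minimal sketch with invented scores:

```python
def auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    pairs = [(p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores]
    return sum(pairs) / len(pairs)

# Invented scores for artifact-tainted (positive) vs. clean (negative) regions:
print(auc([0.8, 0.4], [0.5, 0.3]))  # → 0.75: three of four pairs ranked correctly
```

Unlike accuracy, this ranking-based measure is insensitive to the decision threshold, which is useful when artifact-tainted and clean regions occur at very different rates.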