Over the last few years, the Leap Motion Controller™ (LMC) has been increasingly used in clinical environments to track hand, wrist and forearm positions as an alternative to gold-standard motion capture systems. Since the LMC is marker-less, portable, easy to use and low-cost, it is rapidly being adopted in healthcare services. This paper compares finger kinematic data between the LMC and a gold-standard marker-based motion capture system, Qualisys Track Manager (QTM). Both systems were time-synchronised, and the participants performed abduction/adduction of the thumb and flexion/extension movements of all fingers. The LMC and QTM were compared both in static measurements of finger segment lengths and in dynamic flexion movements of all fingers. A Bland–Altman plot was used to assess the agreement of the LMC with QTM, with Pearson's correlation (r) used to demonstrate trends in the data. Only the proximal interphalangeal (PIP) joints of the middle and ring fingers during flexion/extension demonstrated acceptable agreement (r = 0.9062; r = 0.8978), but with a high mean bias. In conclusion, the study shows that currently the LMC is not suitable to replace gold-standard motion capture systems in clinical settings. Further studies should be conducted to validate the performance of the LMC as it is updated and upgraded.
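The agreement analysis described above (mean bias with Bland–Altman limits of agreement, plus Pearson's r) can be sketched as follows. This is an illustrative implementation of the standard statistics, not the authors' analysis code; the function name and inputs are assumptions:

```python
import numpy as np

def agreement_stats(device_a, device_b):
    """Bland-Altman agreement between two paired measurement series
    (e.g. joint angles from the LMC and from QTM).

    Returns the mean bias, the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences), and Pearson's r.
    """
    a = np.asarray(device_a, dtype=float)
    b = np.asarray(device_b, dtype=float)
    diff = a - b                      # per-sample differences
    bias = diff.mean()                # mean bias
    sd = diff.std(ddof=1)             # sample SD of differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    r = np.corrcoef(a, b)[0, 1]       # Pearson's correlation
    return bias, loa, r
```

A high r with a large bias, as reported for the PIP joints, would show up here as r near 1 but a bias far from 0, which is why correlation alone does not establish agreement.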
Code completion has become an indispensable feature of modern Integrated Development Environments. In recent years, many approaches have been proposed to tackle this task. However, it is hard to compare models without explicitly re-evaluating them, due to differences in the benchmarks used (e.g. datasets and evaluation metrics). Besides, almost all of these works report the accuracy of code completion models as aggregated metrics averaged over all types of code tokens. Such evaluations make it difficult to assess the potential improvements for particularly relevant types of tokens (e.g. method or variable names), and blur the differences between the performance of the methods. In this paper, we propose a methodology called Code Token Type Taxonomy (CT3) to address the issue of using aggregated metrics. We identify multiple dimensions relevant for code prediction (e.g. syntax type, context, length), partition the tokens into meaningful types along each dimension, and compute individual accuracies by type. We illustrate the utility of this methodology by comparing the code completion accuracy of a Transformer-based model in two variants: with a closed and with an open vocabulary. Our results show that the refined evaluation provides a more detailed view of the differences and indicates where further work is needed. We also survey the state of the art in Machine Learning-based code completion models to illustrate that there is a demand for a set of standardized benchmarks for code completion approaches. Furthermore, we find that the open vocabulary model is significantly more accurate for relevant code token types such as usage of (defined) variables and literals.
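The core idea of CT3 described above, partitioning tokens by type and computing per-type accuracies instead of a single aggregate, can be sketched as below. This is a minimal illustration of the evaluation scheme, assuming token types are already assigned along one dimension (e.g. syntax type); the function and labels are hypothetical, not the paper's implementation:

```python
from collections import defaultdict

def accuracy_by_type(predictions, targets, token_types):
    """Per-type completion accuracy.

    predictions : model outputs, one per token
    targets     : ground-truth tokens
    token_types : type label for each token along one CT3 dimension,
                  e.g. "variable", "literal", "keyword"
    Returns a dict mapping each type to its accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, tgt, ttype in zip(predictions, targets, token_types):
        total[ttype] += 1
        if pred == tgt:
            correct[ttype] += 1
    return {t: correct[t] / total[t] for t in total}
```

A per-type breakdown like this makes visible, for instance, that a model predicts keywords well while failing on variable names, a difference an averaged accuracy would hide.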