2021
DOI: 10.1016/j.jbiomech.2021.110414

Assessment of spatiotemporal gait parameters using a deep learning algorithm-based markerless motion capture system

Abstract: Spatiotemporal parameters can characterize the gait patterns of individuals, allowing assessment of their health status and detection of clinically meaningful changes in their gait. Video-based markerless motion capture is a user-friendly, inexpensive, and widely applicable technology that could reduce the barriers to measuring spatiotemporal gait parameters in clinical and more diverse settings. Two studies were performed to determine whether gait parameters measured using markerless motion capture demonstrat…
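As context for the "spatiotemporal parameters" named in the title, the sketch below computes stride time, stride length, and mean gait speed from a single heel trajectory. It is a minimal illustration with placeholder array names (heel_fwd, heel_up) and a deliberately simplified event-detection rule; it is not the algorithm used in the paper.

```python
# A minimal sketch, assuming 3D heel positions from a markerless system sampled
# at a known frame rate fs. Array names are illustrative placeholders.
import numpy as np

def heel_strike_frames(heel_up: np.ndarray) -> np.ndarray:
    """Estimate heel-strike frames as local minima of the vertical heel
    coordinate (one simple convention; real pipelines use more robust rules)."""
    interior = heel_up[1:-1]
    is_min = (interior < heel_up[:-2]) & (interior < heel_up[2:])
    return np.where(is_min)[0] + 1

def stride_parameters(heel_fwd: np.ndarray, heel_up: np.ndarray, fs: float):
    """Return stride times (s), stride lengths (m), and mean gait speed (m/s)
    from one foot's forward (heel_fwd) and vertical (heel_up) heel trajectory."""
    hs = heel_strike_frames(heel_up)
    stride_times = np.diff(hs) / fs                  # seconds between strikes
    stride_lengths = np.abs(np.diff(heel_fwd[hs]))   # forward progression (m)
    speed = stride_lengths.sum() / stride_times.sum() if stride_times.size else 0.0
    return stride_times, stride_lengths, speed
```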


Cited by 82 publications (67 citation statements)
References 31 publications

“…Standard deviations of lower limb joint angles were between 3 and 10 degrees with the markerless method compared to the marker-based method, and correlations for hip and ankle frontal and rotation planes were poor (0.26–0.51), indicating high variability of this system. Most recently, Theia3D markerless software (Theia Markerless Inc.), which uses a proprietary pose estimation algorithm, was compared between an 8 camera markerless system (85 Hz) and a seven camera marker-based system (85 Hz) (Kanko et al., 2021b; Kanko et al., 2021c). They reported no bias or statistical difference for walking spatial measures (e.g., step length, step width, velocity) and a small difference in temporal measures (e.g., swing time and double support time) (Kanko et al., 2021c). A follow-on study using the same data found average differences of 22–36 mm for joint centers and 2.6–11 degrees for flexion/extension and abduction/adduction, although rotation about the longitudinal axis differences were 6.9–13.2 degrees compared to marker-based methods (Kanko et al., 2021b).…”
Section: Performance of Current Markerless Applications (mentioning)
confidence: 99%
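The "no bias" statement above refers to method-agreement statistics of the Bland-Altman type (mean difference and limits of agreement between the two capture methods). A minimal sketch of that calculation, using synthetic placeholder numbers rather than data from the cited studies:

```python
# A minimal sketch of a Bland-Altman style agreement check between markerless
# and marker-based measurements of the same gait parameter (e.g. step length
# in metres). All values below are illustrative placeholders.
import numpy as np

def bland_altman(markerless: np.ndarray, marker_based: np.ndarray):
    """Return the mean difference (bias) and 95% limits of agreement."""
    diff = markerless - marker_based
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Example use with synthetic placeholder values (shape only, not real results):
rng = np.random.default_rng(0)
marker_based = rng.normal(0.70, 0.05, size=50)             # step length (m)
markerless = marker_based + rng.normal(0.0, 0.01, size=50) # small random error
bias, (lo, hi) = bland_altman(markerless, marker_based)
print(f"bias = {bias * 1000:.1f} mm, 95% LoA = [{lo * 1000:.1f}, {hi * 1000:.1f}] mm")
```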
“…The data collection setup and procedure were previously used to compare spatiotemporal gait parameter measurements between marker-based and markerless motion capture (Kanko et al, 2021b).…”
Section: Experimental Setup and Procedures (mentioning)
confidence: 99%
“…If the action in volleyball video needs to be extracted by intelligent analysis and description, it needs to be realized by intelligent algorithm in image recognition technology [ 26 ]. With the development of artificial intelligence technology, in terms of data processing of video information, image recognition technology based on deep learning algorithm has become a research hotspot and development trend in the fields of video and image information mining and data association analysis [ 27 , 28 ]. What video intelligent description technology needs to solve is to divide the target data set in the image or video group into multiple groups according to a certain correlation law.…”
Section: Methods (mentioning)
confidence: 99%
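As background on the deep-learning image recognition mentioned in that statement, the sketch below shows one openly available route to per-person 2D keypoints from a video frame (torchvision's pretrained Keypoint R-CNN). It is illustrative only; the file path and score threshold are placeholders, and this is not the proprietary pose estimation algorithm behind the markerless system discussed above.

```python
# A minimal sketch of deep-learning 2D keypoint extraction from one video frame,
# using torchvision's pretrained Keypoint R-CNN (assumes torchvision >= 0.13).
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import (
    keypointrcnn_resnet50_fpn,
    KeypointRCNN_ResNet50_FPN_Weights,
)

weights = KeypointRCNN_ResNet50_FPN_Weights.DEFAULT
model = keypointrcnn_resnet50_fpn(weights=weights).eval()

# "frame.png" is a hypothetical path to a single extracted video frame.
frame = read_image("frame.png", mode=ImageReadMode.RGB).float() / 255.0  # (3, H, W) in [0, 1]

with torch.no_grad():
    detections = model([frame])[0]          # one result dict per input image

# COCO-format keypoints: (num_people, 17, 3) -> x, y, visibility per joint.
confident = detections["scores"] > 0.9      # placeholder confidence threshold
keypoints = detections["keypoints"][confident]
print(keypoints.shape)
```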