Proceedings of the 35th Annual ACM Symposium on Applied Computing 2020
DOI: 10.1145/3341105.3373963
Markerless gait analysis in stroke survivors based on computer vision and deep learning

Cited by 32 publications (28 citation statements)
References 18 publications
“…Lastly, in future investigations our methodology could be used for a straightforward, complete characterization of the modified functional reach test. It could also be transposed to, or coupled with, marker-less algorithms for movement analysis, as performed in [57,58,59,60,61], which are gaining ground in research and in clinical practice to facilitate clinical assessments [58]. Given the current pandemic situation, these could be even more important, since they enable clinical assessments from home and/or telerehabilitation; they could also be coupled with haptic feedback to improve outcomes [62].…”
Section: Discussion
confidence: 99%
“…Three studies have examined 2D monocular applications (25–60 Hz) of DeepLabCut against manual labelling or marker-based methods for the leg closest to the camera (sagittal view), in underwater running (Cronin et al., 2019), countermovement jumping (Drazan et al., 2021) and walking in stroke survivors (Moro et al., 2020). Markerless joint center differences were 10–20 mm greater than marker-based motion capture, but no significant differences were found between methods for temporospatial and joint angle outcome measures during walking and underwater running; this method may therefore be a suitable alternative to 2D marker-based motion capture (Cronin et al., 2019; Moro et al., 2020). Strong correlations were found for joint angles during countermovement jumping compared to marker-based methods; however, that study had to apply a knee and hip correction based on marker-based results (5.6°).…”
Section: Performance of Current Markerless Applications
confidence: 99%
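
In the 2D monocular case, the joint-angle comparisons summarized in the excerpt above reduce to measuring the angle formed by three keypoints in the image plane (for example hip, knee and ankle for the knee). The following is a minimal sketch of that computation; the keypoint coordinates and landmark names are hypothetical examples, not values from the cited studies.

import numpy as np

def sagittal_joint_angle(proximal, joint, distal):
    # Included angle (degrees) at `joint`, formed by 2D keypoints such as
    # hip-knee-ankle for the knee, using image-plane (x, y) coordinates
    # produced by a markerless pose estimator.
    v1 = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical keypoints (pixels) for one frame of the camera-side leg
hip, knee, ankle = (412.0, 310.0), (430.0, 520.0), (405.0, 720.0)
print(sagittal_joint_angle(hip, knee, ankle))
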
“…While markerless systems may still be considered in their infancy, several studies have demonstrated their potential for clinical applications. DeepLabCut was used to extract sagittal 2D joint angles during walking in stroke survivors, showing significant differences between the affected and unaffected side (Moro et al., 2020). Cunningham et al. (2019) examined 2D monocular segment angles of a multi-segmented trunk and head in young children with cerebral palsy, enabling automation of clinical tests that examine spine and head posture.…”
Section: Practical Applications
confidence: 99%
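
A side-to-side comparison like the one reported for stroke survivors in the excerpt above can be sketched as a paired test over per-stride joint-angle ranges of motion. The snippet below is illustrative only: the numbers are invented and the cited study's actual statistical analysis may differ.

import numpy as np
from scipy import stats

# Hypothetical per-stride knee range of motion (degrees) for each side of one
# stroke survivor, derived from markerless sagittal joint angles.
affected_rom   = np.array([28.0, 31.5, 27.2, 30.1, 29.4, 26.8])
unaffected_rom = np.array([54.3, 52.1, 55.8, 53.0, 56.2, 51.7])

# Paired comparison across strides between the affected and unaffected side.
t, p = stats.ttest_rel(affected_rom, unaffected_rom)
print(f"t = {t:.2f}, p = {p:.4f}")
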
“…To fine-tune the network on our data, we consider 45 subjects and randomly select 15 frames for each viewpoint (45 frames per subject); we then manually label the image-plane positions of the landmarks: 4 markers on the bow, 5 markers on the violin (the 6th is excluded because it is almost always occluded), and 3 anatomical landmarks on the body (see Figure 2). The parameters used to train the network are those suggested in other applications; see [14]. Once the network is trained, for each test frame it provides a set of 2D landmarks…”
Section: 2D Landmarks Detection
confidence: 99%
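
The labelling and fine-tuning workflow described in this excerpt maps naturally onto the standard DeepLabCut project API, assuming (as the surrounding context suggests) that the network in question is DeepLabCut; a different pose estimator would follow its own tool's equivalent steps. The sketch below uses hypothetical project, video, and landmark names and the library's default training parameters.

# Minimal sketch of a DeepLabCut labelling/fine-tuning workflow; project name,
# video paths, and body-part names are hypothetical.
import deeplabcut

config = deeplabcut.create_new_project(
    "violin-landmarks",              # hypothetical project name
    "lab",                           # experimenter
    ["videos/subject01_view1.mp4"],  # one video per subject/viewpoint
    copy_videos=False,
)

# Extract a small set of frames per video for manual labelling
deeplabcut.extract_frames(config, mode="automatic", algo="kmeans", userfeedback=False)

# Manually annotate the 12 landmarks (4 bow, 5 violin, 3 anatomical) in the GUI;
# the landmark names are defined in the generated config.yaml before labelling.
deeplabcut.label_frames(config)

# Build the training set and fine-tune the pretrained network with default parameters
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)

# Inference: per-frame 2D landmark coordinates with confidence scores
deeplabcut.analyze_videos(config, ["videos/subject01_view1.mp4"])
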