2022
DOI: 10.1016/j.gaitpost.2022.04.005
Ground reaction force and joint moment estimation during gait using an Azure Kinect-driven musculoskeletal modeling approach

Cited by 19 publications (9 citation statements); references 30 publications.
“…By estimating GRF&Ms solely based on kinematics, without the need for measured force plate data, this approach therefore presents potential to expand biomechanical investigations beyond confined motion laboratories into real-life scenarios (Larsen et al., 2020; Skals et al., 2021). Moreover, the approach can be combined with novel motion-tracking technologies such as inertial measurement units (IMUs), depth-sensor cameras, and markerless techniques (Karatsidis et al., 2017; Ripic et al., 2022), to offer less-constrained application possibilities. Although promising, the error levels for the derived joint moments might still influence clinical decisions, and further improvements are required before the approach can be reliably translated into clinical settings.…”
Section: Discussion
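The core mechanism this excerpt refers to, estimating GRF from kinematics alone, can be illustrated with whole-body Newton-Euler dynamics: while the ground is the only external contact, the total GRF equals the sum of segment mass times centre-of-mass acceleration minus the total weight vector. The sketch below is a minimal illustration under that assumption; the function name, array shapes, and segment values are hypothetical, and it is not the smooth-contact musculoskeletal implementation used in the cited paper.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity vector in m/s^2, z pointing up

def total_grf_from_kinematics(segment_masses, segment_com_acc):
    """Whole-body Newton-Euler estimate of the total ground reaction force.

    segment_masses  : (n_segments,) segment masses in kg
    segment_com_acc : (n_frames, n_segments, 3) segment centre-of-mass
                      accelerations in m/s^2
    Returns an (n_frames, 3) array of total GRF in N; valid only while
    the ground is the sole external contact.
    """
    m = np.asarray(segment_masses, dtype=float)
    inertial = np.einsum('s,fsc->fc', m, segment_com_acc)  # sum_i m_i * a_i
    weight = m.sum() * G                                    # total body weight vector
    return inertial - weight

# Illustrative two-segment body standing still for one frame
masses = [40.0, 35.0]
acc = np.zeros((1, 2, 3))
print(total_grf_from_kinematics(masses, acc))  # approx. [0, 0, 735.8] N
```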
“…Wearable accelerometers in combination with deep learning models [9] or depth cameras in combination with musculoskeletal models [10] might facilitate in-field prediction of multidimensional GRF. Wearable technologies to estimate vertical GRF in-field exist and use physics-based [11] and artificial intelligence (AI) approaches [12,13,14], although their ability to estimate GRF for different tasks and environments is yet to be verified.…”
Section: Introduction
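The physics-based idea mentioned in this excerpt (as distinct from the AI approaches) reduces to Newton's second law applied to the centre of mass: vertical GRF ≈ body mass × (vertical CoM acceleration + g). The snippet below is a minimal sketch under that assumption; the function name, clipping choice, and sample values are illustrative and are not the specific methods of the cited references.

```python
import numpy as np

def vertical_grf_from_accel(body_mass_kg, vertical_acc, g=9.81):
    """Physics-based vertical GRF estimate from CoM vertical acceleration.

    body_mass_kg : total body mass in kg
    vertical_acc : (n_samples,) vertical CoM acceleration in m/s^2,
                   gravity already removed (e.g. from a trunk-worn IMU)
    Returns vertical GRF in N, clipped at zero during flight phases.
    """
    vgrf = body_mass_kg * (np.asarray(vertical_acc, dtype=float) + g)
    return np.clip(vgrf, 0.0, None)

# Illustrative trace: quiet standing, a loading peak, then unloading
acc = np.array([0.0, 0.0, 2.0, -3.0, 0.0])
print(vertical_grf_from_accel(70.0, acc))  # [686.7 686.7 826.7 476.7 686.7]
```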
“…[1][2][3] Its ability to perceive depth and track kinematic data has opened doors for low-cost and portable alternatives to otherwise expensive and elaborate methods. [4][5][6][7] Markerless motion analysis is one such application, in which the Kinect has already been applied owing to these characteristics. 3,8 The Kinect is equipped with an infrared light (IR) emitter and sensor, a colour camera and an array of microphones.…”
Section: Introduction
“…They provide two different perspectives on the scene, much like human vision, and triangulation is used to estimate the distance of each point from the Kinect, thus providing a stream of depth information. 7 This allows the Kinect to map out the scene and, more specifically, identify human shapes and their corresponding joints. As a result, the final skeleton can be used to recognize body postures and movements, and is ultimately used for motion analysis 5 and for monitoring physical therapy and rehabilitation.…”
Section: Introduction
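The triangulation step described in this excerpt boils down to the stereo relation depth = focal length × baseline / disparity: the farther a point, the smaller its apparent shift between the two viewpoints. A minimal sketch of that relation follows; the focal length, baseline, and disparity values are illustrative assumptions, not Kinect calibration parameters.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo-style triangulation: depth is inversely proportional to disparity.

    focal_px     : focal length expressed in pixels
    baseline_m   : separation between the two viewpoints in metres
    disparity_px : apparent shift of a scene point between the views in pixels
    """
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 580 px, baseline = 7.5 cm, disparity = 20 px
print(depth_from_disparity(580.0, 0.075, 20.0))  # ~2.18 m
```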