2022
DOI: 10.3389/fnbot.2021.730965
Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks

Abstract: Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, the current locomotion mode recognition systems being developed for automated high-level control and decision-making rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, we developed an environment classification…

Cited by 54 publications (42 citation statements)
References 43 publications
“…Compared to the classification results for the stair classes in the original ExoNet database [16], [20], [21], we achieved significantly higher prediction accuracies (Appendix 1). First, we discovered a number of ambiguously labelled images in the ExoNet classes LG-S, LG-T-DW, LG-T-IS, IS-S, IS-T-DW, and IS-T-LG.…”
Section: Discussion
confidence: 93%
“…This differentiation was not applied to our study, such that the six ExoNet classes were combined into four. The final four environment classes in our dataset included: level ground (LG), which included both ExoNet classes “level ground steady state” and “level ground transition to door/wall”; level ground – incline stairs (LG-IS), which consisted of the ExoNet class “level ground transition to incline stairs”; incline stairs (IS), which included both ExoNet classes “incline stairs steady state” and “incline stairs transition to door/wall”; and incline stairs – level ground (IS-LG), which consisted of the ExoNet class “incline stairs transition to level ground.” The dataset was randomly split into training (89.5%), validation (3.5%), and testing (7%) sets, matching the subset distribution values from Laschowski and colleagues [16], while maintaining the class distributions, i.e., 85.8% for LG, 9.3% for IS, 3.1% for LG-IS, and 1.8% for IS-LG.…”
Section: Methods
confidence: 99%
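The class merging and stratified split described in the quoted methods passage can be sketched as follows. This is a minimal illustration, not the cited paper's code: the class names follow the abbreviations quoted above, while the mapping dictionary, function name, and use of Python's standard library are assumptions for the example.

```python
import random

# Assumed mapping from the six ExoNet classes to the four merged classes
# described in the quoted text (names taken from the citation statement).
CLASS_MAP = {
    "LG-S": "LG",        # level ground steady state
    "LG-T-DW": "LG",     # level ground transition to door/wall
    "LG-T-IS": "LG-IS",  # level ground transition to incline stairs
    "IS-S": "IS",        # incline stairs steady state
    "IS-T-DW": "IS",     # incline stairs transition to door/wall
    "IS-T-LG": "IS-LG",  # incline stairs transition to level ground
}

def stratified_split(samples, train=0.895, val=0.035, seed=0):
    """Split (image_id, exonet_label) pairs into train/val/test subsets
    with the quoted 89.5/3.5/7% proportions, applied per merged class so
    that the class distributions are preserved in each subset."""
    rng = random.Random(seed)
    by_class = {}
    for image_id, label in samples:
        by_class.setdefault(CLASS_MAP[label], []).append(image_id)
    splits = {"train": [], "val": [], "test": []}
    for merged_label, ids in by_class.items():
        rng.shuffle(ids)
        n_train = round(len(ids) * train)
        n_val = round(len(ids) * val)
        splits["train"] += [(i, merged_label) for i in ids[:n_train]]
        splits["val"] += [(i, merged_label) for i in ids[n_train:n_train + n_val]]
        splits["test"] += [(i, merged_label) for i in ids[n_train + n_val:]]
    return splits
```

Splitting per class (rather than over the pooled dataset) is what keeps the reported per-class percentages roughly constant across the training, validation, and testing sets.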
“…Convolutional layers have also been used to extract deeper features for robust performance in relatively complex environments [18], [19]. Recently, there has been greater interest in using deep learning methods for terrain and environment classification [20]–[23]. The environment features or classification decisions can then be fused with activity-mode decisions using decision trees or SVMs [12], [15].…”
Section: Computer Vision for Prosthetic Control
confidence: 99%
“…Then, environment features or classification decisions can be fused with activity-mode decisions using decision trees or SVMs [12], [15]. For a more exhaustive review of environment sensing techniques for terrain classification, refer to [1], [2], [23].…”
Section: Computer Vision for Prosthetic Control
confidence: 99%
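The fusion step mentioned in the two citation statements above can be illustrated with a deliberately simple sketch. The cited works use decision trees or SVMs; the weighted-average rule, function name, and class labels below are assumptions chosen only to show the idea of combining a vision-based environment decision with a sensor-based activity-mode decision.

```python
def fuse_decisions(env_probs, activity_probs, env_weight=0.5):
    """Fuse per-class probabilities from a camera-based environment
    classifier and a mechanical/inertial activity-mode recognizer over a
    shared label set, then return the highest-scoring class.

    This weighted average stands in for the decision-tree/SVM fusion used
    in the cited papers; it is an illustrative placeholder, not their method.
    """
    labels = set(env_probs) | set(activity_probs)
    fused = {
        label: env_weight * env_probs.get(label, 0.0)
               + (1.0 - env_weight) * activity_probs.get(label, 0.0)
        for label in labels
    }
    return max(fused, key=fused.get)
```

Raising `env_weight` gives the camera more influence, which matters precisely because vision extends the prediction horizon beyond what mechanical and inertial sensors alone provide, as the abstract notes.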