2021
DOI: 10.1101/2021.04.02.438126
Preprint

Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons

Abstract: Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., high-level controllers), we designed an environment recognition system using computer vision and deep learning. We collected over 5.6 million images of indoor and outdoor real-world walking environments using a wearable camera system, of which ~923,000 images were annot…

Cited by 7 publications (7 citation statements)
References 21 publications
“…Compared to radar and laser rangefinders, cameras can provide more detailed information about the field-of-view and detect physical obstacles and terrain changes in peripheral locations (Figure 3). Most environment recognition systems have used RGB cameras (Diaz et al., 2018; Khademi and Simon, 2019; Laschowski et al., 2019b, 2020b, 2021b; Novo-Torres et al., 2019; Da Silva et al., 2020; Zhong et al., 2020) or 3D depth cameras (Varol and Massalin, 2016; Hu et al., 2018; Massalin et al., 2018; Zhang et al., 2019b, 2019c, 2019d, 2020; Krausz and Hargrove, 2021; Tschiedel et al., 2021) mounted on the chest (Laschowski et al., 2019b, 2020b, 2021b), waist (Khademi and Simon, 2019; Zhang et al., 2019d; Krausz and Hargrove, 2021), or lower-limbs (Varol and Massalin, 2016; Diaz et al., 2018; Massalin et al., 2018; Zhang et al., 2019b, 2019c, 2020; Da Silva et al., 2020; Zhong et al., 2020) (Table 1). Few studies have adopted head-mounted cameras for biomimicry (Novo-Torres et al., 2019; Zhong et al., 2020).…”
Section: Literature Review
confidence: 99%
“…The latest generation of environment recognition systems has used convolutional neural networks (CNNs) for image classification (Rai and Rombokas, 2018; Khademi and Simon, 2019; Laschowski et al., 2019b, 2021b; Novo-Torres et al., 2019; Zhang et al., 2019b, 2019c, 2019d, 2020; Zhong et al., 2020) (Table 3). Deep learning replaces manually extracted features with multilayer networks that can automatically and efficiently learn the optimal image features from training data.…”
Section: Literature Review
confidence: 99%
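As a minimal illustration of the learned-feature idea in the statement above, the sketch below applies a single 2D convolution filter followed by a ReLU activation — the basic building block a CNN stacks many layers deep. The toy image and the hand-picked edge-detection kernel are invented examples for illustration; in a trained CNN the kernel values are learned from data rather than chosen by hand.

```python
# Minimal sketch of the core CNN operation: a "valid" 2D convolution
# (cross-correlation) followed by a ReLU activation, in pure Python.

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a 2D list `image` with `kernel`."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of elementwise products over the kernel-sized window.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    """Zero out negative responses, as a CNN activation layer does."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# Toy 4x4 "image" whose right half is bright: a vertical edge.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# Hand-picked 3x3 vertical-edge kernel (Sobel-like); a CNN would learn this.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

feature_map = relu(conv2d(image, kernel))
print(feature_map)  # → [[3, 3], [3, 3]] — strong response along the edge
```

Stacking many such filters, interleaved with pooling and followed by fully connected layers, is what lets a CNN classify walking environments directly from pixels instead of hand-engineered features.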
“…TinyML (tiny machine learning) has seen a significant rise in attention in recent years as a disruptive technology that will accelerate the widespread adoption of machine learning across industries and society. In particular, the ability to perform real-time predictions automatically using machine learning on low-cost, low-power edge and embedded devices can enable a huge swath of applications, ranging from autonomous vehicles and advanced driving assistance systems [2] to intelligent exoskeletons leveraging embedded sensing information for environment-adaptive control [21,22]. In addition, TinyML can enable greater privacy in machine learning applications by facilitating tetherless intelligence without the need for continuous connectivity, or, at the very least, by reducing the amount of information that needs to be sent to the cloud.…”
Section: Broader Impact
confidence: 99%
“…Compared to radar and laser rangefinders, cameras can provide more detailed information about the field-of-view and detect physical obstacles and terrain changes in peripheral locations (example shown in Figure 3). Most environment recognition systems have used RGB cameras (Da Silva et al., 2020; Diaz et al., 2018; Khademi and Simon, 2019; Krausz and Hargrove, 2015; Laschowski et al., 2019b, 2020b, 2021b; Novo-Torres et al., 2019; Zhong et al., 2020) and/or 3D depth cameras (Hu et al., 2018; Krausz et al., 2015; Krausz and Hargrove, 2021; Massalin et al., 2018; Varol and Massalin, 2016; Zhang et al., 2019b, 2019c, 2019d, 2020) mounted on the chest (Krausz et al., 2015; Laschowski et al., 2019b, 2020b, 2021b), waist (Khademi and Simon, 2019; Krausz et al., 2019; Krausz and Hargrove, 2021; Zhang et al., 2019d), or lower-limbs (Da Silva et al., 2020; Diaz et al., 2018; Massalin et al., 2018; Varol and Massalin, 2016; Zhang et al., 2019b, 2019c, 2020; Zhong et al., 2020) (see Table 1). Few studies have adopted head-mounted cameras for biomimicry (Novo-Torres et al., 2019; Zhong et al., 2020).…”
Section: Literature Review
confidence: 99%
“…The latest generation of environment recognition systems has used convolutional neural networks (CNNs) for image classification (Khademi and Simon, 2019; Laschowski et al., 2019b, 2021b; Novo-Torres et al., 2019; Rai and Rombokas, 2018; Zhang et al., 2019b, 2019c, 2019d, 2020; Zhong et al., 2020) (see Table 3 and Figure 4). One of the earliest publications came from Laschowski and colleagues (2019b), who designed and trained a 10-layer convolutional neural network using five-fold cross-validation, which differentiated between three environment classes with 94.9% classification accuracy.…”
Section: Literature Review
confidence: 99%
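The five-fold cross-validation protocol mentioned in the statement above can be sketched in plain Python. The fold-splitting and hold-one-fold-out loop are the standard procedure; `dummy_train_and_score` is a hypothetical stand-in for training the 10-layer CNN (which is not reproduced here) and returning its validation accuracy.

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n_samples, train_and_score, k=5):
    """Hold out each fold once as validation; average the k scores."""
    folds = k_fold_indices(n_samples, k)
    scores = []
    for i, val_idx in enumerate(folds):
        # Training set = every index not in the held-out fold.
        train_idx = [j for f, fold in enumerate(folds) if f != i
                     for j in fold]
        scores.append(train_and_score(train_idx, val_idx))
    return sum(scores) / k

# Hypothetical stand-in for CNN training; it just reports the fraction of
# samples held out, so each fold scores 20/100 = 0.2.
def dummy_train_and_score(train_idx, val_idx):
    return len(val_idx) / (len(train_idx) + len(val_idx))

mean_score = cross_validate(100, dummy_train_and_score, k=5)
print(mean_score)
```

Averaging accuracy over the five held-out folds, as in the loop above, gives a less optimistic estimate than a single train/test split, which is presumably why the cited work reports its 94.9% figure this way.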