2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9341416
Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits

Abstract: Understanding users' gait preferences of a lower-body exoskeleton requires optimizing over the high-dimensional gait parameter space. However, existing preference-based learning methods have only explored low-dimensional domains due to computational limitations. To learn user preferences in high dimensions, this work presents LINECOSPAR, a human-in-the-loop preference-based framework that enables optimization over many parameters by iteratively exploring one-dimensional subspaces. Additionally, this work identif…

Cited by 29 publications (27 citation statements) · References 24 publications
“…In many cases, the trajectories are significantly changed or fully generated at runtime, and some papers are completely dedicated to the problem of optimization/generation of trajectories [190][191][192][193]. In some studies, model-based computations [194][195][196][197] or polynomial minimum jerk trajectory generation methods [94] have been used to generate the trajectories offline.…”
Section: Action Sublayer (mentioning)
confidence: 99%
“…Common choices of link function (g_p and g_o) include the Gaussian cumulative distribution function [17], [19] and the sigmoid function, g(x) = (1 + e^{-x})^{-1} [7]. We model feedback via the sigmoid link function because empirical results suggest that a heavier-tailed noise distribution improves performance.…”
Section: Active Learning Algorithm (mentioning)
confidence: 99%
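The quoted passage above concerns the link function used to model noisy pairwise preference feedback. Below is a minimal sketch of the two options it names, the sigmoid g(x) = (1 + e^{-x})^{-1} and the Gaussian CDF, applied to a latent utility difference; the utility values and noise scale in the example are hypothetical, not taken from the cited work.

```python
import numpy as np
from scipy.stats import norm


def sigmoid_link(x):
    """Sigmoid link g(x) = 1 / (1 + exp(-x)); a heavier-tailed noise model."""
    return 1.0 / (1.0 + np.exp(-x))


def gaussian_link(x):
    """Gaussian-CDF (probit) link; a lighter-tailed noise model."""
    return norm.cdf(x)


def preference_probability(utility_a, utility_b, link=sigmoid_link, noise=1.0):
    """P(action a preferred over action b) given latent utilities and a link.

    The noise scale controls how deterministic the modeled feedback is.
    """
    return link((utility_a - utility_b) / noise)


# Hypothetical latent utilities 0.8 and 0.3: a is preferred with ~62% probability.
print(preference_probability(0.8, 0.3))
```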
“…Most existing work in high-dimensional Gaussian process learning requires quantitative feedback [21], [22]. Previous work in preference-based high-dimensional Gaussian process learning [7] models the posterior over a sequence of one-dimensional subspaces. However, this approach applies only to the regret minimization problem because each one-dimensional subspace includes the action maximizing the posterior.…”
Section: Active Learning Algorithm (mentioning)
confidence: 99%
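The one-dimensional-subspace idea referenced in that statement can be sketched as follows. This is an illustrative LineCoSpar-style construction, not the cited implementation: `x_best` (the current posterior maximizer), the line half-length, and the number of candidate points are assumed placeholders.

```python
import numpy as np


def sample_line_through(x_best, n_points=25, half_length=0.5, rng=None):
    """Candidate points along a random 1-D line through the posterior maximizer.

    Restricting preference queries and posterior inference to such lines keeps
    computation tractable even when x_best lives in a high-dimensional space.
    """
    rng = np.random.default_rng() if rng is None else rng
    direction = rng.standard_normal(x_best.shape[0])
    direction /= np.linalg.norm(direction)
    offsets = np.linspace(-half_length, half_length, n_points)
    return x_best + offsets[:, None] * direction


# Example: a 6-dimensional gait-parameter space, line anchored at the origin.
candidates = sample_line_through(np.zeros(6))
```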
“…As opposed to relying on just one field, this work explores combining the successes of both: the formality of stability from control theory and the ability to learn the relationship between complex parameter combinations and their resulting locomotive behavior from machine learning. This is accomplished by building upon our previous results [24], [25] and systematically integrating preference-based learning with gait generation via HZD optimization. The result is optimal walking on hardware based only on relative pairwise preferences from the human operator (i.e.…”
Section: Introduction (mentioning)
confidence: 99%
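Taken together, the cited statements describe a human-in-the-loop pipeline: generate candidate gaits, collect relative pairwise preferences from the operator, and update a learned utility model. The sketch below only fixes that data flow; `sample_candidates`, `synthesize_gait`, `query_operator`, and `update_model` are hypothetical placeholders for subspace sampling, HZD gait generation, hardware trials, and posterior updates, respectively.

```python
def preference_loop(sample_candidates, synthesize_gait, query_operator,
                    update_model, n_iterations=10):
    """Iterate pairwise preference queries between freshly generated gaits.

    All four callables are placeholders; each iteration turns two candidate
    parameter vectors into two gaits and one relative preference label.
    """
    history = []
    for _ in range(n_iterations):
        params_a, params_b = sample_candidates()
        gait_a, gait_b = synthesize_gait(params_a), synthesize_gait(params_b)
        preferred = query_operator(gait_a, gait_b)  # 0 if a preferred, 1 if b
        history.append((params_a, params_b, preferred))
        update_model(history)  # refit the utility posterior on all feedback
    return history
```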