2021
DOI: 10.1109/lra.2021.3061331
Adaptive Robust Kernels for Non-Linear Least Squares Problems


Cited by 51 publications (30 citation statements)
References 21 publications
“…This type of approach is used in early work in dynamic covariance scaling [75] and related methods such as switchable constraints [76], [77] and max mixtures [78], which use a tunable weighting to decrease the influence of inconsistent loop closures on the optimization. In recent work, adaptive kernels for robust cost functions have been developed [79]. There are also distinct methods that check for, and exclude, inconsistent loop closures from the optimization, such as realizing, reversing, recovering (RRR) [80].…”
Section: The SLAM Back-End: Robot Pose and Map Estimation (mentioning)
confidence: 99%
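
These back-end techniques share one mechanism: each loop-closure residual receives a weight that shrinks as the residual grows, so inconsistent constraints lose influence on the optimization. Below is a minimal sketch of one iteratively reweighted Gauss-Newton step using the dynamic covariance scaling factor s = min(1, 2Φ/(Φ + χ²)) from [75]; the scalar-residual simplification, the function names, and the default Φ are illustrative assumptions, not the cited authors' implementation.

```python
import numpy as np

def dcs_scale(chi2, phi=1.0):
    # Dynamic covariance scaling factor from [75]:
    # s = min(1, 2*phi / (phi + chi2)); s -> 1 for consistent
    # residuals and s -> 0 as the squared residual chi2 grows.
    return min(1.0, 2.0 * phi / (phi + chi2))

def robust_gauss_newton_step(J, r, phi=1.0):
    # One iteratively reweighted Gauss-Newton step (hypothetical
    # helper): scale each residual row by its DCS factor, then
    # solve the weighted normal equations in least-squares form.
    s = np.array([dcs_scale(ri * ri, phi) for ri in r])
    Jw = J * s[:, None]          # row-wise scaling of the Jacobian
    rw = r * s                   # matching scaling of the residuals
    dx, *_ = np.linalg.lstsq(Jw, -rw, rcond=None)
    return dx                    # state increment: x <- x + dx
```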
“…Also, their performance depends strongly on the choice of specific parameters, which often requires manual tuning. In more recent work such as [7] and [8], this parametrization issue was addressed with self-tuning algorithms based on expectation-maximization. Still, these approaches remain limited by their symmetry constraint.…”
Section: Prior Work (mentioning)
confidence: 99%
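
The adaptive kernels cited here as [7] build on Barron's general robust loss, whose shape parameter α interpolates between familiar kernels (α = 2 is the quadratic L2 loss, α = 0 the Cauchy/Lorentzian kernel, and α → −∞ approaches Welsch). A minimal sketch of that loss and its IRLS weight follows, with the parameter names α and c taken from Barron's formulation; the EM-based self-tuning of α mentioned in the excerpt is not shown here.

```python
import numpy as np

def general_loss(r, alpha, c):
    # Barron's general robust loss rho(r; alpha, c). The two
    # removable singularities are handled as special cases.
    x = (r / c) ** 2
    if alpha == 2.0:                 # quadratic (L2) kernel
        return 0.5 * x
    if alpha == 0.0:                 # Cauchy/Lorentzian kernel
        return np.log1p(0.5 * x)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((x / b + 1.0) ** (alpha / 2.0) - 1.0)

def general_weight(r, alpha, c):
    # IRLS weight w(r) = rho'(r) / r, used to reweight residuals
    # between Gauss-Newton iterations.
    x = (r / c) ** 2
    if alpha == 2.0:
        return 1.0 / c ** 2
    if alpha == 0.0:
        return 2.0 / (r ** 2 + 2.0 * c ** 2)
    b = abs(alpha - 2.0)
    return (x / b + 1.0) ** (alpha / 2.0 - 1.0) / c ** 2
```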
“…where $\mathbf{x} = [x_m \; y_m \; z_m \; u \; v \; w \; r_0]^{\top}$ is the optimization variable and $p_i$ is a given inlier point. Meanwhile, to avoid the refined axis being weakened by a few large residuals, a robust kernel is employed, e.g., the Huber kernel [36], …”
Section: Algorithm 2: Refinement of the Coarse Axis (mentioning)
confidence: 99%
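
For reference, the Huber kernel named in this excerpt is quadratic for small residuals and linear beyond a threshold, so a few large residuals contribute far less than they would under a squared loss. A minimal sketch, with the threshold δ as an assumed tuning parameter rather than a value from the cited work:

```python
import numpy as np

def huber_loss(r, delta=1.0):
    # Quadratic for |r| <= delta, linear beyond: large residuals
    # grow only linearly instead of quadratically.
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def huber_weight(r, delta=1.0):
    # IRLS weight rho'(r)/r: 1 for inliers, delta/|r| for outliers.
    # The maximum() guard avoids division by zero at r = 0.
    return np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
```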