2022
DOI: 10.48550/arxiv.2205.03517
Preprint

AdaptiveON: Adaptive Outdoor Local Navigation Method For Stable and Reliable Actions

Abstract: We present a novel outdoor navigation algorithm to generate stable and efficient actions to navigate a robot to the goal. We use a multi-stage training pipeline and show that our model produces policies that result in stable and reliable robot navigation on complex terrains. Based on the Proximal Policy Optimization (PPO) algorithm, we developed a novel method to achieve multiple capabilities for outdoor navigation tasks, namely: alleviating the robot's drifting, keeping the robot stable on bumpy terrains, avo…

Cited by 3 publications (6 citation statements)
References 33 publications
“…Two sets of performance metrics are considered for uneven terrain navigation: safety-related metrics and motion-related metrics. The safety-related metrics encompass the robot's roll ϕ and pitch ψ angles, the change in elevation ż_r during the robot's movement from x_s to x_f, and the vibration v_b, defined in [29] as the cumulative rate of change of the roll and pitch angles: v_b = |ω_r| + |ω_p|, with ω_r = ϕ̇ and ω_p = ψ̇. A lower vibration value indicates a more stable trajectory.…”
Section: B. Performance Metrics and Simulation Scenarios
confidence: 99%
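The vibration metric quoted above can be sketched in code. This is a minimal illustration, not code from either paper: `vibration_metric` is a hypothetical helper, and it assumes the roll and pitch rates are obtained by finite differences over uniformly sampled angle trajectories, with the per-step values summed to give a cumulative score.

```python
import numpy as np

def vibration_metric(roll, pitch, dt):
    """Cumulative rate of change of the roll and pitch angles:
    v_b accumulates |omega_r| + |omega_p| over the trajectory,
    where omega_r and omega_p are the roll and pitch rates."""
    omega_r = np.diff(roll) / dt   # roll rate via finite differences
    omega_p = np.diff(pitch) / dt  # pitch rate via finite differences
    return float(np.sum(np.abs(omega_r) + np.abs(omega_p)))

# Constant attitude (flat ground): zero vibration.
flat = vibration_metric(np.zeros(10), np.zeros(10), dt=0.1)

# Oscillating attitude (bumpy terrain): strictly larger value.
t = np.linspace(0.0, 3.0, 10)
bumpy = vibration_metric(np.sin(t), np.cos(t), dt=0.1)
```

Under this reading, a lower `v_b` corresponds to a more stable trajectory, matching the statement above.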
“…Learning-based approaches, in particular DRL, have pushed the boundaries of mobile robot navigation in unstructured and dynamically changing environments [25,26,8]. Nevertheless, these methods suffer from poor sample efficiency under sparse-reward settings, because dense rewards play an important role in learning for DRL methods.…”
Section: Related Work
confidence: 99%
“…For example, deterministic policies such as DDPG [26], A3C [35], and DQN [36] with dense rewards have been incorporated for robot navigation on uneven terrains. An adaptive model that directly learns control and environmental dynamics is presented in [8]. Moreover, segmentation [37] and self-supervised [25] methods for identifying navigable regions have also been combined with DRL-based navigation methods to ensure stable and efficient navigation.…”
Section: Related Work
confidence: 99%