2023
DOI: 10.1109/access.2022.3233765
Navigation Among Movable Obstacles via Multi-Object Pushing Into Storage Zones

Abstract: With the majority of mobile robot path planning methods being focused on obstacle avoidance, this paper studies the problem of Navigation Among Movable Obstacles (NAMO) in an unknown environment containing both static objects (i.e., objects that cannot be moved by a robot) and movable objects (i.e., objects that can be moved by a robot). In particular, we focus on a specific instance of the NAMO problem in which the obstacles have to be moved to predefined storage zones. To tackle this problem, we propose an online planning algorithm that all…

Cited by 8 publications (4 citation statements)
References 29 publications
“…However, due to the sparsity of the demonstration value parameters, they cannot provide comprehensive guidance for reinforcement learning. In Bendikas et al (2023), tasks are recursively decomposed into a series of subtasks, and then the agent is initialized with the existing critic network parameters to guide the current actor, thus achieving the guiding effect of the network Q function. In Wang et al (2021b) and Wang et al (2022a), for the control of complex assembly tasks, imitation learning is used to initially learn the outline of the trajectory, and then its parameters are used for subsequent force control learning, resulting in effective assembly force control strategies.…”
Section: Related Work: Trajectory Learning Methods Based on Imitation...
confidence: 99%
“…Compared with [23], transforming the moving-obstacle problem into a simpler stationary-obstacle problem avoids having to model the complexity of moving obstacles in the Gaussian BLF. Compared with [24], our method simplifies the computation for moving obstacles. The moving-obstacle case can then be integrated into more complex obstacle avoidance strategies.…”
Section: βˆ†π‘‡ = 𝐾 𝑇 (𝑅 π‘œ + 𝐿)/𝑒 π‘šπ‘Žπ‘₯mentioning
confidence: 99%
“…Cong [23] presented a method for obstacle avoidance utilizing reinforcement learning, implementing Q-table values in real-world mobile robots to perform tasks within actual scenarios. Ellis [24] proposed a technique for pushing multiple objects into designated storage areas, achieving obstacle avoidance for mobile robots in environments with movable obstacles by relocating them to predefined locations. However, these existing methods can be cumbersome and challenging to integrate with the Barrier Lyapunov Function (BLF) obstacle avoidance control strategy employed in our research.…”
Section: Introduction
confidence: 99%