Incremental multi-agent path finding
Year: 2021
DOI: 10.1016/j.future.2020.09.032

Cited by 9 publications (4 citation statements)
References 12 publications
“…Comparison method A is based on reference [13] and reproduces a state-of-the-art method that accounts for changes in the environment, which we updated to fit the simulation environment. It therefore cannot plan in advance for the environment changes associated with task execution, but it can respond by re-planning as changes occur.…”
Section: Methods (mentioning)
confidence: 99%
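The re-plan-on-change behaviour described in this excerpt can be illustrated with a small sketch. This is an assumption-laden illustration, not the cited implementation: it uses a plain BFS planner on a 4-connected grid, and the names bfs_path and follow_with_replanning are hypothetical. The agent has no advance knowledge of the changes; it only recomputes its path when an observed change blocks the remaining plan.

```python
from collections import deque

def bfs_path(free, start, goal):
    """Shortest 4-connected path over the currently free cells, or None."""
    parents, frontier = {start: None}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in parents:
                parents[nxt] = cur
                frontier.append(nxt)
    return None

def follow_with_replanning(free, start, goal, changes_per_step):
    """Execute a plan, re-planning only after an observed change blocks
    the remaining path (no anticipation of future changes)."""
    pos, path = start, bfs_path(free, start, goal)
    if pos == goal:
        return pos
    for blocked in changes_per_step:           # changes revealed during execution
        free -= blocked
        if path is None or any(cell in blocked for cell in path):
            path = bfs_path(free, pos, goal)   # reactive re-planning
        if path is None:
            return None                        # goal unreachable for now
        pos, path = path[1], path[1:]          # advance one step along the plan
        if pos == goal:
            return pos
    return pos

if __name__ == "__main__":
    free = {(x, y) for x in range(4) for y in range(4)}
    # Blocking (2, 0) invalidates the remaining plan here and triggers a re-plan.
    print(follow_with_replanning(
        free, (0, 0), (3, 3),
        changes_per_step=[set(), {(2, 0)}, set(), set(), set(), set()]))
```

In the example, cell (2, 0) lies on the initially computed plan, so when the change appears the agent re-plans from its current position and still reaches the goal.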
“…Semiz et al. [13] proposed a multi-agent path search that treats environment changes as changes in the available nodes of the environment, which is represented as a graph. Of the environment changes considered, this is the closest to the definition addressed in this study, in that the set of available nodes changes.…”
Section: A. Studies That Consider Environment Changes (mentioning)
confidence: 99%
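The node-availability view attributed to Semiz et al. [13] in this excerpt can be sketched as follows; the GridGraph class and its method names are hypothetical and only illustrate the representational idea that an environment change is a change to the set of usable graph nodes.

```python
class GridGraph:
    """Grid world whose environment changes are just changes to the set of
    available nodes (hypothetical class, for illustration only)."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        # Every cell starts out available; an environment change toggles this.
        self.available = {(x, y): True
                          for x in range(width) for y in range(height)}

    def set_available(self, node, flag):
        """Apply an environment change: make a node (un)available."""
        self.available[node] = flag

    def neighbors(self, node):
        """4-connected neighbours, restricted to currently available nodes."""
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if self.available.get(nxt, False):
                yield nxt

if __name__ == "__main__":
    g = GridGraph(4, 4)
    print(sorted(g.neighbors((1, 1))))   # [(0, 1), (1, 0), (1, 2), (2, 1)]
    g.set_available((1, 2), False)       # an environment change removes a node
    print(sorted(g.neighbors((1, 1))))   # (1, 2) is no longer returned
```

Any planner built on top of neighbors() then automatically respects the current environment, and an environment change amounts to a single call to set_available().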
“…To address the long development time required by traditional deep learning algorithms, Gao et al. proposed a phased training method, PRM-TD3, in which PRM performs global planning and TD3 is trained for local planning; it offers better flexibility in single-step time but performs poorly in overall path time [23]. In a multi-robot cooperative task, Semiz et al. performed initial planning based on conflict-based search and used D* for the low-level planning of individual robots to improve resilience in dynamic scenarios [24]. D* has also been used to determine node path costs, with PSO optimizing the control trajectory at the execution level [25].…”
Section: Related Work (mentioning)
confidence: 99%
“…Based on the CBS algorithm, many improved methods have been proposed. Semiz [23] proposed an incremental algorithm for multi-agent path planning in dynamic environments by replacing the low-level A* search of CBS with D*-lite. Barer [24] proposed enhanced CBS (ECBS), which replaces the best-first search at both the high and low levels of CBS with a focal search.…”
Section: Introduction (mentioning)
confidence: 99%
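To make the structure referenced in this excerpt concrete, here is a compact, simplified CBS sketch in which the single-agent planner is a pluggable argument: the high level branches on conflicts and re-plans one agent at a time through whatever low_level function is supplied, which is the seam where a from-scratch A* could be swapped for an incremental planner such as D*-lite. This is an illustrative sketch under stated simplifications (vertex conflicts only, unit step costs, agents wait at their goals), not the code of [23] or the original CBS/ECBS implementations; all names are chosen here.

```python
import heapq
from itertools import count

def low_level_astar(graph, start, goal, constraints, horizon=50):
    """Time-expanded shortest-path search that avoids (node, time) constraints.
    Stand-in for the pluggable low-level planner; an incremental planner such
    as D*-lite could be substituted with the same interface."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        t, node, path = heapq.heappop(frontier)
        if node == goal and all((goal, t2) not in constraints
                                for t2 in range(t, horizon)):
            return path
        if (node, t) in seen or t >= horizon:
            continue
        seen.add((node, t))
        for nxt in list(graph[node]) + [node]:          # move or wait in place
            if (nxt, t + 1) not in constraints:
                heapq.heappush(frontier, (t + 1, nxt, path + [nxt]))
    return None

def first_vertex_conflict(paths):
    """Return (agent_i, agent_j, node, time) for the first vertex conflict, or None."""
    for t in range(max(len(p) for p in paths)):
        occupied = {}
        for i, p in enumerate(paths):
            node = p[min(t, len(p) - 1)]                # agents wait at their goals
            if node in occupied:
                return occupied[node], i, node, t
            occupied[node] = i
    return None

def cbs(graph, starts, goals, low_level):
    """High-level constraint-tree search; single-agent planning is delegated
    to whatever `low_level(graph, start, goal, constraints)` is supplied."""
    tiebreak = count()
    constraints = [set() for _ in starts]
    paths = [low_level(graph, s, g, c) for s, g, c in zip(starts, goals, constraints)]
    if any(p is None for p in paths):
        return None
    open_list = [(sum(len(p) for p in paths), next(tiebreak), constraints, paths)]
    while open_list:
        _, _, constraints, paths = heapq.heappop(open_list)
        conflict = first_vertex_conflict(paths)
        if conflict is None:
            return paths                                # conflict-free solution
        a, b, node, t = conflict
        for agent in (a, b):                            # branch: forbid the node for one agent
            child = [set(c) for c in constraints]
            child[agent].add((node, t))
            new_path = low_level(graph, starts[agent], goals[agent], child[agent])
            if new_path is not None:
                new_paths = list(paths)
                new_paths[agent] = new_path
                heapq.heappush(open_list, (sum(len(p) for p in new_paths),
                                           next(tiebreak), child, new_paths))
    return None

if __name__ == "__main__":
    # Two agents whose shortest routes meet at node "b" at the same time step.
    graph = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b", "e"], "e": ["d"]}
    solution = cbs(graph, starts=["a", "c"], goals=["d", "e"], low_level=low_level_astar)
    for i, p in enumerate(solution):
        print("agent", i, p)
```

On the toy graph, both agents' shortest routes pass through node "b" at the same time step; the returned solution has the first agent wait one step at its start so the conflict disappears.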