2020
DOI: 10.1109/tro.2019.2946891
PPCPP: A Predator–Prey-Based Approach to Adaptive Coverage Path Planning

Abstract: This paper was recommended for publication by Editor I.-M. Chen upon evaluation of the reviewers' comments.


Cited by 47 publications (16 citation statements)
References 29 publications
“…3c and 3d (for scenario 2 with 3 robots), when R_dp is set to zero, the paths are not only less efficient but also very chaotic. Note that other reward functions have already been validated in the previous work [15]. Video for the scenario shown in Fig.…”
Section: A Case Study 1: Coverage Amid Stationary Obstacles
confidence: 72%
“…In Dec-PPCPP, each robot considers itself a prey that needs to avoid predation from two types of predators: 1) a stationary virtual predator, and 2) dynamic predators (other robots). In this subsection, the stationary predator avoidance reward is formulated, which is similar to the work in [15]. A prey, while searching the target area for food (to achieve coverage), aims to continually maximize its distance to a stationary predator, denoted as Ψ_s^i, where i is the robot index.…”
Section: A Stationary Predator Avoidance Reward
confidence: 99%
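The quoted passage describes a distance-based avoidance reward: candidate moves farther from the stationary predator Ψ_s^i are scored higher. A minimal sketch of that idea is below; the linear normalization, the `d_max` cap, and the greedy neighbor selection are illustrative assumptions, not the paper's exact formulation.

```python
import math

def stationary_predator_reward(candidate, predator, d_max):
    """Illustrative avoidance reward: cells farther from the stationary
    predator score higher, capped and normalized to [0, 1]. The linear
    form and d_max cap are assumptions for this sketch."""
    d = math.dist(candidate, predator)
    return min(d, d_max) / d_max

# Among candidate neighbor cells, a prey (robot) would prefer the one
# that maximizes its distance-based reward.
predator = (0.0, 0.0)
neighbors = [(1, 0), (0, 1), (1, 1)]
best = max(neighbors, key=lambda c: stationary_predator_reward(c, predator, 10.0))
```

In the full planner this term would be one component of a weighted reward sum (alongside terms such as the dynamic-predator avoidance reward R_dp discussed above), rather than the sole criterion.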