2019
DOI: 10.1109/lra.2019.2896759
Autonomous Exploration of Complex Underwater Environments Using a Probabilistic Next-Best-View Planner

Cited by 43 publications (27 citation statements)
References 18 publications
“…This representation is used by the view planner to calculate a set of candidate viewpoints and by the path planner to obtain a free path from the current position to the next-best-viewpoint. This article focuses on the active SLAM part of the system, whereas the view planner used is the one reported in [3]. Note, however, that this exploration framework could use any view planner capable of computing a set of candidate viewpoints that further explore the scene and the number of unknown cells observable from them, given an octree that represents the already explored region.…”
Section: Methods
confidence: 99%
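The mechanism this citing statement refers to, scoring candidate viewpoints by the number of unknown octree cells observable from them, can be illustrated with a short sketch. This is an illustration only: the Octree.raycast interface, the CellState enum, the sensor parameters, and the next_best_view helper are hypothetical placeholders and do not reproduce the API of the planner reported in [3].

# Minimal sketch: given an occupancy octree of the already explored region,
# score each candidate viewpoint by the number of unknown cells visible from it.
# The octree interface below is an assumed placeholder.
from dataclasses import dataclass
from enum import Enum
import math

class CellState(Enum):
    FREE = 0
    OCCUPIED = 1
    UNKNOWN = 2

@dataclass
class Viewpoint:
    position: tuple   # (x, y, z) in the map frame
    yaw: float        # sensor heading, radians

def count_observable_unknown(octree, vp, fov=math.radians(60),
                             max_range=5.0, n_rays=200):
    """Cast rays across the horizontal field of view and count how many
    unknown cells they reach before being blocked by an occupied cell."""
    unknown = 0
    for i in range(n_rays):
        bearing = vp.yaw - fov / 2 + fov * i / (n_rays - 1)
        direction = (math.cos(bearing), math.sin(bearing), 0.0)
        # octree.raycast is assumed to yield the cells along the ray in order.
        for cell in octree.raycast(vp.position, direction, max_range):
            if cell.state == CellState.OCCUPIED:
                break                      # occlusion: stop this ray
            if cell.state == CellState.UNKNOWN:
                unknown += 1
    return unknown

def next_best_view(octree, candidates):
    """Pick the candidate viewpoint that observes the most unknown cells."""
    return max(candidates, key=lambda vp: count_observable_unknown(octree, vp))

In the exploration framework described above, the selected viewpoint would then be handed to the path planner, which computes a free path from the current position to it.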
“…Although the method combines off-line view planning and SLAM, it does not check whether the resulting trajectory allows the SLAM algorithm to keep the localization uncertainty bounded, enforcing only some overlap between viewpoints. We have also presented solutions that do not require a preliminary map, but that neither localize the vehicle while exploring the 2D [27] or 3D [3] scene nor use the robot state uncertainty to drive the exploration.…”
Section: Related Work
confidence: 99%
“…However, computing the coverage percentage without the original reference model is considered difficult. The work presented in [19,30] terminates if no viewpoint has an IG above a predefined threshold, while the work in [31] terminates if the difference in total entropy reduction falls below a predefined threshold. Three termination conditions are used in our Guided NBV method: comparing the current NBV process iteration number with a predefined maximum number of iterations, checking the change in the global entropy E after n iterations [32], or when no valid viewpoints are generated by the viewpoint sampling component.…”
Section: Termination
confidence: 99%
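The three termination checks described in that statement are simple to express in code. The sketch below is an assumption-laden illustration, not the cited implementation; the function name, argument names, and default thresholds are all invented for clarity.

# Minimal sketch of the three termination conditions: a maximum-iteration cap,
# stagnation of the global map entropy over the last n iterations, and the
# absence of valid sampled viewpoints.
def should_terminate(iteration, entropy_history, valid_viewpoints,
                     max_iterations=100, n=5, entropy_eps=1e-3):
    # 1) Hard cap on the number of NBV iterations.
    if iteration >= max_iterations:
        return True
    # 2) The global entropy E has barely changed over the last n iterations.
    if len(entropy_history) > n:
        if abs(entropy_history[-1] - entropy_history[-1 - n]) < entropy_eps:
            return True
    # 3) The viewpoint sampler produced no valid candidates.
    if not valid_viewpoints:
        return True
    return False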
“…The work in [17] shows that the frontier method spends time evaluating a large number of viewpoints in outdoor environments, while Oshima et al. [18] mitigated this by clustering randomly scattered points near the frontiers and evaluating the cluster centroids as candidate viewpoints. A similar approach is taken in [19], which generates viewpoints randomly, prioritizes them, and then selects viewpoints based on visibility and distance. In addition, the work in [16] selects the frontiers that minimize velocity changes.…”
Section: Introduction
confidence: 99%
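The clustering idea attributed to Oshima et al. [18] in the statement above, reducing many frontier-adjacent sample points to a few centroid candidates, can be sketched as follows. The use of scikit-learn KMeans and the chosen cluster count are illustrative assumptions; the cited work does not necessarily use this library or these parameters.

# Minimal sketch: cluster points sampled near the frontiers and keep only the
# cluster centroids as candidate viewpoints for evaluation.
import numpy as np
from sklearn.cluster import KMeans

def cluster_frontier_points(frontier_points, n_clusters=8):
    """Reduce an (N, 3) array of frontier-adjacent sample points to a small
    set of centroid candidate viewpoints."""
    pts = np.asarray(frontier_points, dtype=float)
    n_clusters = min(n_clusters, len(pts))
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pts)
    return km.cluster_centers_   # candidate viewpoints to score

Each centroid would then be scored (for example, by expected information gain or observable unknown cells) instead of scoring every sampled point, which is the time saving the citing authors describe.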