2013 International Conference on Computer and Robot Vision
DOI: 10.1109/crv.2013.40

A Fast Floor Segmentation Algorithm for Visual-Based Robot Navigation

Abstract: We present a novel technique that robustly segments free space for robot navigation purposes. In particular, we are interested in reactive visual navigation, in which rapid and accurate detection of the free space where the robot can navigate is crucial. Contrary to existing methods that use multiple cameras in different configurations, we use a downward-facing monocular camera to search for free space in a large and complicated room environment. The proposed approach combines two techniques. First, we apply…
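A note on the sketch below: the abstract's description of the two combined techniques is truncated, so this is only a generic, hypothetical baseline for downward-facing monocular floor segmentation (model the colour of the bottom image strip, assumed to be free floor directly in front of the robot, then keep the connected region matching that model). It is not the authors' algorithm, and all names and parameters are illustrative.

# Hypothetical baseline, not the paper's method: seeded colour-model floor segmentation.
import cv2
import numpy as np

def segment_floor(bgr, seed_rows=40, hue_bins=30, sat_bins=32, score_thresh=50):
    """Label pixels whose hue/saturation resembles the bottom image strip."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Assumption: the strip closest to the robot (bottom rows) is free floor.
    seed = hsv[-seed_rows:, :, :]
    hist = cv2.calcHist([seed], [0, 1], None, [hue_bins, sat_bins], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # Back-project the floor colour model onto the whole frame.
    score = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    mask = (score > score_thresh).astype(np.uint8)
    # Keep only connected regions that touch the bottom edge (navigable floor).
    _, labels = cv2.connectedComponents(mask)
    bottom = set(labels[-1, :].tolist()) - {0}
    return (np.isin(labels, list(bottom)) * 255).astype(np.uint8)

# Usage: floor_mask = segment_floor(cv2.imread("frame.png"))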

Cited by 16 publications (7 citation statements)
References 19 publications
“…al [12] proposed a technique in which they computed a score based on several factors for declaring a candidate region as floor. In [13] the authors presented a graphical approach to detect the floor, assuming that in every frame the upper half is less likely to contain obstacles.…”
Section: Prior Work
confidence: 99%
“…We decided to simplify the evaluation and assumed that the interesting regions should appear on parts of the coral reef: that is, the areas that visually correspond only to water are not considered of interest. First, to divide the image into water and nonwater regions, we applied an adapted version of the robust superpixel-based classifier proposed in [31].…”
Section: Comparison of Detected Regions
confidence: 99%
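The statement above mentions an adapted superpixel-based classifier from [31] for splitting frames into water and non-water regions. As a rough illustration of the general superpixel-then-classify pattern only (SLIC superpixels, mean-colour features, one random-forest label per superpixel), the following sketch uses assumed features, labels, and classifier rather than the cited method.

# Illustrative superpixel-then-classify pattern; features, labels, and classifier are assumptions.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(rgb, n_segments=300):
    """Over-segment the image and return one mean-colour feature per superpixel."""
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    feats = np.array([rgb[labels == i].mean(axis=0) for i in range(labels.max() + 1)])
    return labels, feats

def classify_water(rgb, clf):
    """Per-pixel water mask obtained by labelling each superpixel with the classifier."""
    labels, feats = superpixel_features(rgb)
    pred = clf.predict(feats)   # assumed convention: 1 = water, 0 = non-water
    return pred[labels]         # broadcast superpixel labels back to pixels

# Placeholder training so the sketch runs; real use would fit on annotated superpixels.
rng = np.random.default_rng(0)
clf = RandomForestClassifier(n_estimators=50).fit(rng.random((40, 3)), rng.integers(0, 2, 40))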
“…It has strong integration with multiple modern robots, their 3D models and built-in navigation features. [4][5][6] Gazebo has a number of built-in default environments and basic mechanisms to construct new environments. 7 Moreover, some user-created open-source packages significantly increase the power of new environment construction, which makes it possible to automate the process and integrate various constraints into the environment.…”
Section: Introduction
confidence: 99%