2001
DOI: 10.1016/s0921-8890(00)00103-2
A robot self-localization system based on omnidirectional color images

Cited by 16 publications (8 citation statements)
References 17 publications
“…A number of research works also use colour histograms in their methods for robot localization (Kawabe et al., 2006; Rizzi & Cassinis, 2001; Ulrich & Nourbakhsh, 2000). These works previously verified that colour can be used as a feature in mobile robot localization.…”
Section: Colour Features
confidence: 98%
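The colour-histogram matching these citing works describe can be sketched as follows. The bin count, the histogram-intersection similarity measure, and all image data below are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Quantize an RGB image into a normalized joint colour histogram.

    `image` is an (H, W, 3) uint8 array; the result has bins**3 entries.
    """
    # Map each 0-255 channel value to one of `bins` levels.
    quantized = (image.astype(np.int32) * bins) // 256
    # Fold the three per-channel indices into a single bin index.
    idx = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins**3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical normalized histograms."""
    return np.minimum(h1, h2).sum()

# Toy usage: compare a current view against two stored reference views.
rng = np.random.default_rng(0)
view = rng.integers(0, 256, (60, 80, 3), dtype=np.uint8)
ref_same = view.copy()
ref_other = rng.integers(0, 256, (60, 80, 3), dtype=np.uint8)

h = colour_histogram(view)
print(histogram_intersection(h, colour_histogram(ref_same)))   # ≈ 1.0
print(histogram_intersection(h, colour_histogram(ref_other)))  # < 1.0
```

In a localization setting, the reference view with the highest intersection score would identify the robot's most likely place.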
“…A panoramic image offers a 360° view of the environment. Because of the robustness of bearing estimates and the complete view of the environment, previous works have utilized omni-directional vision sensors in robotic navigation (Rizzi & Cassinis, 2001; Usher et al., 2003; Menegatti et al., 2004; Huang et al., 2005b). Stereo vision is another option used in robotic navigation.…”
Section: Vision Based Navigation
confidence: 99%
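The bearing robustness mentioned above comes from the fact that, in an unwrapped panoramic image, the horizontal pixel coordinate maps linearly to bearing. A minimal sketch, assuming column 0 corresponds to bearing 0 and the image spans the full 360°:

```python
import math

def column_to_bearing(x, image_width):
    """Map a column index in an unwrapped panoramic image to a
    bearing in radians (assumes the image covers a full turn)."""
    return 2.0 * math.pi * x / image_width

# A landmark seen at column 180 of a 720-pixel-wide panorama lies
# at bearing pi/2, i.e. 90 degrees relative to the robot's heading.
print(column_to_bearing(180, 720))  # ≈ 1.5708
```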
“…The robot position is then determined by choosing the most appropriate output with a situation-identification module. Cassinis and Rizzi [5] present a self-localization system that processes the panoramic image data through both neural networks and multiple linear regression. Their system gives low positioning error (averaging less than 10 cm), but is quite sensitive to errors if the robot is offset by more than about 5° from its original orientation.…”
Section: B. Iconic Versus Feature-Based Localization
confidence: 99%
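A multiple-linear-regression localizer of the kind attributed to [5] can be sketched with synthetic data: fit a linear map from image-derived feature vectors to known (x, y) positions, then apply it to a new view. The feature dimensionality, the synthetic data, and the least-squares fit below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Training data: feature vectors (e.g. histogram-style image features)
# observed at known reference positions; all values are synthetic.
rng = np.random.default_rng(1)
n_views, n_features = 50, 16
features = rng.random((n_views, n_features))
true_W = rng.random((n_features + 1, 2))          # hidden linear map
X = np.hstack([features, np.ones((n_views, 1))])  # append intercept column
positions = X @ true_W                            # known (x, y) per view

# Fit the regression coefficients by ordinary least squares.
W, *_ = np.linalg.lstsq(X, positions, rcond=None)

# Predict the position for a new view's feature vector.
new_view = np.hstack([rng.random(n_features), 1.0])
print(new_view @ W)  # estimated (x, y) for the new view
```

Because the synthetic positions here are an exact linear function of the features, the fit recovers the map; real image features would of course only be approximately linear in position, which is one reason the cited system combines regression with neural networks.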
“…Assuming an arbitrary robot heading angle effectively introduces an extra dimension for the vision-based localization. Many solution architectures, including multiple linear regression and neural networks [5], do not scale well for this change. For LTRC, the mean and median orientation localization errors are less than 0.055 and 0.034 rad (3.2° and 2.0°), on par with the accuracy of many digital compasses.…”
Section: A. Best Reference Site Is Known
confidence: 99%