Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292)
DOI: 10.1109/robot.2002.1013490
Mobile robot localization using an incremental eigenspace model

Abstract: When using appearance-based recognition for self-localization of mobile robots, the images obtained during exploration of the environment need to be stored efficiently in memory. PCA offers a means of representing the images in a low-dimensional subspace, which allows for efficient matching and recognition. For active exploration it is necessary to use an incremental method for the computation of the subspace. While such methods have been considered before, only the on-line construction of eige…
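The abstract describes two ideas: representing stored views in a low-dimensional PCA subspace, and updating that subspace incrementally as new images arrive during exploration. A minimal sketch of both, using scikit-learn's `IncrementalPCA` as a stand-in for the paper's own update rule (the random "views", batch split, and subspace dimension below are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)

# Synthetic stand-in for grayscale images gathered during exploration:
# 200 "views" of 32x32 pixels, flattened to 1024-dimensional vectors.
views = rng.random((200, 32 * 32))

# Update the eigenspace incrementally, batch by batch, instead of
# recomputing PCA from scratch over all stored images.
ipca = IncrementalPCA(n_components=10)
for batch in np.array_split(views, 5):
    ipca.partial_fit(batch)

# Localization then matches a query view to the stored views in the
# low-dimensional subspace, far cheaper than matching in pixel space.
codes = ipca.transform(views)          # (200, 10) subspace coordinates
query = ipca.transform(views[42:43])   # project a query image
nearest = int(np.argmin(np.linalg.norm(codes - query, axis=1)))
print(nearest)  # index of the best-matching stored view
```

Since the query here is one of the stored views, the nearest match is the view itself; in practice the query would be a fresh camera image projected into the same subspace.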

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
5

Citation Types

0
43
0

Publication Types

Select...
5
3
1

Relationship

0
9

Authors

Journals

Cited by 65 publications (43 citation statements)
References 13 publications
“…The problem is made very difficult by at least three factors: (a) the input space is huge, since we deal with images, usually at a reasonable resolution and in colour; (b) images of the same place can be quite different as illumination conditions change and moving obstacles get in the way; and (c), recognition must be done on-line in real time, as the robot is moving around. The topic is widely researched, but incremental learning approaches have been so far mostly used for constructing the geometrical map, or the environment representation, online [6,1]. Robustness to illumination changes, and more generally to realistic visual variations in time, has been addressed in [26], where it was shown that a pure learning approach can be very effective for tackling the first two issues: indeed it was demonstrated that an approach based upon Support Vector Machines (SVM, see, e.g., [4]) in batch mode could achieve a remarkable robustness to illumination changes and variability due to the normal use of the environments.…”
Section: Introduction
Mentioning confidence: 99%
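The statement above credits a batch-trained Support Vector Machine with robustness to illumination changes in place recognition. A hedged sketch of that setup, with scikit-learn's `SVC` standing in for the SVM of the cited work and purely synthetic descriptors simulating two places (the feature layout and shift are assumptions for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic image descriptors for two "places"; the differing feature
# means crudely simulate their distinct visual appearance.
place_a = rng.normal(0.0, 0.3, size=(40, 16)) + np.r_[np.zeros(8), np.ones(8)]
place_b = rng.normal(0.0, 0.3, size=(40, 16)) + np.r_[np.ones(8), np.zeros(8)]
X = np.vstack([place_a, place_b])
y = np.array([0] * 40 + [1] * 40)

# Batch training: the whole exploration sequence is available at once,
# in contrast to the incremental setting the surveyed paper targets.
clf = SVC(kernel="rbf").fit(X, y)

# Recognition: classify a previously unseen view of place A.
probe = rng.normal(0.0, 0.3, size=(1, 16)) + np.r_[np.zeros(8), np.ones(8)]
label = int(clf.predict(probe)[0])
print(label)  # 0 = place A
```

The batch requirement is exactly the limitation the incremental eigenspace model avoids: an SVM trained this way must see all exploration data before deployment.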
“…Well-known methods for indoor localization are based on PCA (Artac et al, 2002), (Jogan et al, 2003), or on Integral Invariant features (Wolf et al, 2005). The main advantage of global methods over local techniques is that image similarities can be computed much faster.…”
Section: Introduction
Mentioning confidence: 99%
“…multi-dimensional histograms [8], [9] computed over the entire image. In case of omni-directional views representations in terms of eigenviews obtained by principal component analysis were applied successfully both for topological and metric model acquisition, thanks to small variations of the image appearance within a location and rotationally invariant image representations [10], [11]. The use of local point features for both metric and topological localization was proposed by [12], [13].…”
Section: Introduction and Related Work
Mentioning confidence: 99%
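The last statement contrasts eigenview representations with multi-dimensional histograms computed over the entire image. A small sketch of that global-histogram idea, comparing places by histogram intersection (the two-channel synthetic "images" and the beta-distributed pixel values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def global_histogram(image, bins=8):
    """Joint (multi-dimensional) histogram over all pixels of a
    2-channel image, normalized to sum to 1."""
    hist, _ = np.histogramdd(image.reshape(-1, 2),
                             bins=bins, range=[(0, 1), (0, 1)])
    return hist / hist.sum()

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 for identical histograms."""
    return np.minimum(h1, h2).sum()

# Two synthetic views of the same place share pixel statistics;
# a view of another place has mirrored channel statistics.
same_a = rng.beta(2, 5, size=(32, 32, 2))
same_b = rng.beta(2, 5, size=(32, 32, 2))
other = rng.beta(5, 2, size=(32, 32, 2))

ha, hb, ho = map(global_histogram, (same_a, same_b, other))
print(intersection(ha, hb) > intersection(ha, ho))  # same place matches better
```

Like the eigenview approach, this is a global method: one descriptor per image, so comparing two views costs a single histogram intersection rather than per-feature matching.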