2007 IEEE/ASME International Conference on Advanced Intelligent Mechatronics
DOI: 10.1109/aim.2007.4412566
Mobile robot self-localization in complex indoor environments using monocular vision and 3D model

Cited by 22 publications (12 citation statements)
References 7 publications
“…The GPU has been frequently used in robot localization for precisely this reason, including: Kinect depth-SLAM [17], image feature correspondence search for SIFT features [18], and line features [19].…”
Section: Related Work (mentioning)
Confidence: 99%
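The correspondence search this statement refers to is easy to illustrate. Below is a minimal CPU sketch in Python/NumPy of brute-force descriptor matching with a ratio test, the kind of workload that GPU implementations such as [18] parallelise; the 128-D descriptors, the 0.8 ratio threshold, and the random data are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def match_descriptors(query, train, ratio=0.8):
    """Brute-force nearest-neighbour correspondence search with a ratio test.

    query: (N, D) descriptors from the live camera image.
    train: (M, D) descriptors from the reference view / map.
    Returns putative (query_idx, train_idx) correspondences.
    The 0.8 ratio threshold is an illustrative assumption.
    """
    # Pairwise squared Euclidean distances, shape (N, M). This O(N*M*D)
    # step is the part that GPU implementations parallelise.
    d2 = (np.sum(query ** 2, axis=1)[:, None]
          + np.sum(train ** 2, axis=1)[None, :]
          - 2.0 * query @ train.T)
    order = np.argsort(d2, axis=1)
    matches = []
    for i in range(query.shape[0]):
        best, second = order[i, 0], order[i, 1]
        # Ratio test on squared distances: comparing against ratio**2
        # is equivalent to the usual ratio test on plain distances.
        if d2[i, best] < (ratio ** 2) * d2[i, second]:
            matches.append((i, best))
    return matches

# Toy usage with random 128-D "SIFT-like" descriptors (hypothetical data).
rng = np.random.default_rng(0)
train = rng.standard_normal((500, 128)).astype(np.float32)
query = train[:50] + 0.05 * rng.standard_normal((50, 128)).astype(np.float32)
print(len(match_descriptors(query, train)), "putative matches")
```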
“…In [4] a vision-based method is described that uses a 3D model of the environment to localize itself. This allows for a nearly unlimited size environment and does not require a specific part of it to be in view.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Many vision-based control solutions for MAVs and other robots pose significant processing power demands [1], [2], [4], [8], mainly due to the computational costs associated with image processing and computer vision algorithms. It has been shown that the PixArt IR tracking sensor can be successfully applied in a visual servoing algorithm on a quadrotor vehicle [3], without the need for powerful processing hardware.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Kitanov et al [13] and Koch and Teller [14] present near neighbors to our approach, using hand-built 3D models of an environment. Images rendered from this model then have features extracted; these features are compared to features extracted from camera images of the actual scene.…”
Section: Prior Work (mentioning)
Confidence: 99%
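The render-and-compare scheme described above can be sketched compactly. The Python/NumPy toy below is a minimal sketch under stated assumptions, not the authors' implementation: it substitutes point landmarks and a nearest-feature score for the line features and matching used in [13] and [14], assumes a planar robot pose with illustrative pinhole intrinsics, and recovers the pose by exhaustive grid search.

```python
import numpy as np

def project_points(points_3d, pose, f=500.0, cx=320.0, cy=240.0):
    """Project 3D world points into a virtual camera at a planar robot pose.

    pose: (x, y, heading); the camera looks along the heading direction.
    The pinhole intrinsics (f, cx, cy) are illustrative assumptions.
    """
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    p = points_3d - np.array([x, y, 0.0])
    cam = np.stack([s * p[:, 0] - c * p[:, 1],   # camera x (rightward)
                    -p[:, 2],                    # camera y (downward; world z is up)
                    c * p[:, 0] + s * p[:, 1]],  # camera z (forward)
                   axis=1)
    in_front = cam[:, 2] > 0.1
    z = np.where(in_front, cam[:, 2], 1.0)       # avoid dividing by ~0 for culled points
    uv = np.column_stack([f * cam[:, 0] / z + cx,
                          f * cam[:, 1] / z + cy])
    return uv, in_front

def pose_score(model_pts, observed_uv, pose):
    """Lower is better: mean pixel distance from each rendered model
    feature to its nearest observed image feature."""
    uv, ok = project_points(model_pts, pose)
    if not ok.any():
        return np.inf
    d = np.linalg.norm(uv[ok][:, None, :] - observed_uv[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy usage: recover a pose by exhaustive search over a coarse candidate grid.
rng = np.random.default_rng(1)
model = rng.uniform([0.0, 0.0, 0.0], [10.0, 10.0, 3.0], size=(20, 3))  # hypothetical landmarks
true_pose = (2.0, 3.0, 0.3)
observed, vis = project_points(model, true_pose)
observed = observed[vis]
candidates = [(gx, gy, gt)
              for gx in np.linspace(0.0, 4.0, 9)
              for gy in np.linspace(1.0, 5.0, 9)
              for gt in np.linspace(-0.5, 1.0, 7)]
best = min(candidates, key=lambda p: pose_score(model, observed, p))
print("estimated pose:", best)  # lands near true_pose on this toy data
```

In practice the candidate poses would come from odometry or a particle filter rather than a dense grid, and the features compared would be extracted from images rendered from the full 3D model, as the cited works describe.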