2012 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2012.6385735

Dynamic visual understanding of the local environment for an indoor navigating robot

Cited by 23 publications (33 citation statements). References 27 publications.

“…It computes the differential of autocorrelation according to directions directly. A similar detector, named Kanade-Lucas-Tomasi (KLT) [42], was employed in [56] for efficient and continuous tracking. Compared to the Harris detector, KLT has an additional greedy corner selection criterion; thus, it is computationally more efficient.…”
Section: Corner-based Approaches
confidence: 99%
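
As a concrete illustration of the KLT pipeline this excerpt refers to, the sketch below selects Shi-Tomasi corners and tracks them frame to frame with OpenCV's pyramidal Lucas-Kanade tracker. It is a minimal sketch assuming an OpenCV environment; the video file name and all parameter values are illustrative choices, not the settings used in [56].

# Minimal KLT-style tracking sketch (OpenCV): greedy corner selection,
# then pyramidal Lucas-Kanade tracking from frame to frame.
import cv2

cap = cv2.VideoCapture("indoor_sequence.mp4")   # hypothetical input video
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi "good features to track": quality threshold plus a minimum
# spacing between corners (the greedy selection criterion mentioned above).
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                             qualityLevel=0.01, minDistance=7)

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track the previously selected corners into the current frame.
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk_params)
    p0 = p1[status.flatten() == 1].reshape(-1, 1, 2)   # keep successfully tracked points
    prev_gray = gray

cap.release()

In practice a continuous tracker also re-detects corners once too many have been lost; that bookkeeping is omitted here.
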
“…a video), Tsai et al. [11] generate a set of hypotheses from the first frame of the video, and use a Bayesian filter to evaluate the hypotheses on-line based on their abilities to explain the 2D motions of a set of tracked features. Tsai and Kuipers [10] extended the real-time scene understanding method to generate children hypotheses on-line from existing hypotheses to describe the scene in more detail. These methods simply detect…”
Section: Introduction
confidence: 99%
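
The generate-and-test scheme described here can be pictured as a Bayesian weight update over a set of scene hypotheses: each hypothesis is scored by how well it predicts the observed 2D positions of the tracked features, and the weights are renormalized every frame. The sketch below is only an illustration under a Gaussian observation-noise assumption; the function and array names are hypothetical and are not taken from [10] or [11].

import numpy as np

def update_hypothesis_weights(weights, predicted_pts, observed_pts, sigma=2.0):
    """One Bayesian update step over scene hypotheses (hypothetical names).

    weights       : (H,) prior weights, one per hypothesis
    predicted_pts : (H, N, 2) image points each hypothesis predicts for the N tracked features
    observed_pts  : (N, 2) tracked feature positions in the current frame
    """
    # Per-hypothesis, per-feature prediction error against the tracked points.
    err = np.linalg.norm(predicted_pts - observed_pts[None], axis=2)      # (H, N)
    # Gaussian observation model: hypotheses that explain the 2D motions score higher.
    log_lik = -0.5 * np.sum((err / sigma) ** 2, axis=1)                   # (H,)
    posterior = weights * np.exp(log_lik - log_lik.max())                 # subtract max for stability
    return posterior / posterior.sum()

Hypotheses whose weight falls below a threshold can then be pruned and, as in the extension discussed above, new child hypotheses can be added to the set on-line.
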
“…We demonstrate our attention focusing method in an on-line generate-and-test framework for scene understanding [10]. The steps with solid gray blocks are adapted from [10], and the steps with dashed blue blocks show where we select and extract the informative features.…”
Section: Introduction
confidence: 99%