2011
DOI: 10.1177/0278364911406760

An extended-HCT semantic description for visual place recognition

Abstract: We describe a new semantic descriptor that enables robots to recognize visual places. The descriptor integrates image features and color information via the hull census transform (HCT) and image histogram indexing. Our approach extracts the semantic description from convex hull points and statistical calculations. Color histograms are then formed by four indices and appended to the descriptor. The semantic codebook consists of several places, each with many image descriptors. Finally, a one-versus-one (OVO) multi-cla…
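As an illustration only (the full method sits behind the truncated abstract), a minimal sketch of a hull-plus-histogram place descriptor might look like the following. The function name `place_descriptor`, the choice of hull-point statistics, and the four color indices (assumed here to be R, G, B, and intensity) are assumptions for the sketch, not the authors' actual HCT formulation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def place_descriptor(keypoints, image, bins=16):
    """Sketch of a descriptor combining hull-point statistics and color histograms.

    keypoints: (N, 2) array of detected feature coordinates.
    image:     (H, W, 3) RGB image with values in [0, 255].
    """
    # Convex hull of the keypoints (a stand-in for the HCT step).
    hull = ConvexHull(keypoints)
    hull_pts = keypoints[hull.vertices]

    # Simple statistics over the hull points (mean and std per axis).
    stats = np.concatenate([hull_pts.mean(axis=0), hull_pts.std(axis=0)])

    # Color histograms over four assumed indices: R, G, B, and intensity.
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    intensity = image.mean(axis=-1)
    hists = [np.histogram(c, bins=bins, range=(0, 255), density=True)[0]
             for c in (r, g, b, intensity)]

    # Final descriptor: 4 hull statistics + 4 histograms of `bins` entries each.
    return np.concatenate([stats] + hists)
```

A codebook for a place could then simply be the list of such descriptors collected along a route, with an OVO classifier trained on descriptors from all places.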

Cited by 18 publications (12 citation statements)
References 42 publications
“…The experiments are similar to those that were done in recent and related work (Ullah et al, 2008; Wang and Lin, 2011). The robot first learns each place at each laboratory in one illumination condition via constructing the corresponding bubble descriptors at a set of base points using one route sequence.…”
Section: Results (supporting)
Confidence: 84%
“…The first set of experiments is done with COLD data set (Pronobis and Caputo, 2009) that has been used extensively to evaluate previous approaches (Ullah et al, 2008; Wang and Lin, 2011). The COLD data set consists of visual data from three different laboratories with different illumination conditions (the cloudy, sunny and night conditions).…”
Section: Results (mentioning)
Confidence: 99%
“…The proposed technique provides the robot vision system with the capability of omnidirectional surveillance and 3D reconstruction. It can be used for mobile robot applications, such as obstacle detection, with the derived 3D information and vision-guided navigation using the omnidirectional images [35]. The imaging formation of the hybrid camera system is formulated using a unifying projection model.…”
Section: Discussion (mentioning)
Confidence: 99%