2010 IEEE/RSJ International Conference on Intelligent Robots and Systems 2010
DOI: 10.1109/iros.2010.5650455
Real-time 3D visual sensor for robust object recognition

Cited by 25 publications (25 citation statements). References 8 publications.
“…Figure 1 shows the block diagram of the proposed method. Given an input color image and the Time-of-Flight (ToF) data, the image is segmented using SLIC superpixels [16,17]. Next, several keypoints are extracted and labeled as features for matching using SURF [18].…”
Section: Proposed Method
confidence: 99%
“…A 3D visual sensor [16], which consists of a ToF camera and two CCD cameras, is used to capture color and 3D information to construct a database. To obtain the visual information, a small handheld observation table with an XBee wireless controller is installed on the robot, enabling observation of the object from various viewpoints.…”
Section: Database
confidence: 99%
“…Generally, it is difficult to recognize objects that have the same color and/or no texture. For future work, we plan to use an object recognition method that integrates 3D shape information (31), which can significantly improve object recognition performance.…”
Section: Image Processing
confidence: 99%
“…The robot can acquire visual information from the 3D visual sensor [12], auditory information by shaking the object, and haptic information by grasping it. We also propose an online algorithm for multimodal categorization based on the autonomously acquired multimodal information and words, which are partially given by the human user.…”
Section: Introduction
confidence: 99%