2017
DOI: 10.1109/tase.2016.2549552
Visual–Tactile Fusion for Object Recognition

Abstract: The camera provides rich visual information about objects and has become one of the most mainstream sensors in the automation community. However, it is often not applicable when objects are not visually distinguishable. On the other hand, tactile sensors can capture multiple object properties, such as texture, roughness, spatial features, compliance, and friction, and therefore provide another important modality for perception. Nevertheless, effective combination of the visual an…
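For context, the fusion idea sketched in the abstract can be illustrated with a short example. The snippet below is not the paper's method; it only shows a generic early (feature-level) fusion baseline in which per-sample visual and tactile descriptors are concatenated and passed to a standard classifier. The feature dimensions, the synthetic data, and the SVM classifier are all assumptions chosen for illustration.

# Minimal sketch (not the cited paper's method): feature-level fusion of
# visual and tactile descriptors for object recognition. The descriptor
# sizes, synthetic data, and SVM classifier are assumptions for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_objects, samples_per_object = 5, 40
visual_dim, tactile_dim = 128, 32  # assumed descriptor sizes

# Synthetic stand-ins for per-sample visual and tactile feature vectors,
# with class-dependent means so the toy problem is learnable.
labels = np.repeat(np.arange(n_objects), samples_per_object)
visual = rng.normal(loc=labels[:, None], scale=2.0, size=(labels.size, visual_dim))
tactile = rng.normal(loc=labels[:, None], scale=1.0, size=(labels.size, tactile_dim))

# Early fusion: concatenate the two modalities into one descriptor per sample.
fused = np.hstack([visual, tactile])

X_tr, X_te, y_tr, y_te = train_test_split(
    fused, labels, test_size=0.25, stratify=labels, random_state=0
)

# Standardize inside the pipeline so scaling is fit on training data only.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"fused visual+tactile accuracy: {clf.score(X_te, y_te):.2f}")

A late-fusion variant would instead train one classifier per modality and combine their scores; which scheme performs better depends on the data and sensors at hand.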

Cited by 209 publications (98 citation statements)
References 52 publications
“…Object understanding and detection under dynamic environmental changes are usually based on adaptive background subtraction and other object recognition methods [17,21,35,65-68]. A preliminary scheme for the practical integration of the proposed models with these algorithms is presented in Figure 7, where smog, as a global environmental change, has significant implications for video behavior recognition and loitering detection: within a hovering period of two persons, only half of the hovering behaviors are detected, and only one person is red-highlighted while the other always remains in a green rectangle, indicating the degradation of video surveillance efficiency within the considered periods under real challenging scenarios.…”
Section: Simulation and Discussion (mentioning)
confidence: 99%
“…They are more effective and more convenient for fulfilling the tasks. In the future, the tactile information of the robotic hand can be used [16-18], which will improve grasping for teleoperation.…”
Section: Discussion (mentioning)
confidence: 99%
“…Dobrišek et al. [20] show how combining audio and video for emotion recognition increases performance to 77.5%, a 5% improvement over the best individual channel (audio). Liu et al. [21] use visual information in combination with information from tactile sensors to capture multiple object properties, such as texture, roughness, spatial features, compliance, and friction, bridging the gap for objects that are not visually distinguishable and leading to better recognition results without any relevant drawback in execution time.…”
Section: Related Work (mentioning)
confidence: 99%