2017
DOI: 10.1016/j.ijleo.2016.11.155
Semantic recognition of workpiece using computer vision for shape feature extraction and classification based on learning databases

Cited by 16 publications (5 citation statements)
References 8 publications
“…The first step above forms the basis of image recognition (Song et al., 2016). Choosing the appropriate features directly impacts the final segmentation and recognition accuracy (Ding et al., 2017). Hu et al. proposed an image-feature-extraction method based on shape characteristics (Hu et al., 2016), and Yang et al. introduced multi-structure feature fusion for face recognition based on multi-resolution extraction (Yang et al., 2011).…”
Section: Introduction
confidence: 99%
“…The GRAPES equation matrix is three-dimensional. When the resolution is 1, the matrix size is 360 × 180 × 38, and when the resolution is 0.5, the matrix size is 720 × 360 × 38.…”
Section: The Parallel Strategy for Solving the GRAPES Helmholtz Equation
confidence: 99%
“…Ding et al. (2017) propose not only searching for more elaborate algorithms to define robot manipulation patterns, but also accompanying the computer vision with a database of visual elements that facilitates interpretation of the environment, improving the learning and recognition of figures and objects so that the actions performed by robots become more natural (Ding et al., 2017).…”
Section: Interaction
confidence: 99%