2018
DOI: 10.1007/s00371-018-1561-3

Unified convolutional neural network for direct facial keypoints detection

Cited by 13 publications (11 citation statements)
References 26 publications
“…Modern advanced sensing systems based on "smart camera sensors" have quickly spread into many fields of industry, such as general process control,4 general object identification and recognition,5 quality checking (Stejskal, Bayraktar, Alghamdi),5–7 reading texts and codes (e.g. car identification plates),8,9 face recognition (Park),10 robot control via gestures10 and general pose control of robots using visual servoing,3 visual navigation of single or multiple mobile robots (Kim, Ostertag, Mikulova, Roberti),11–14 collision detection and perception (Lehnhardt),15 many applications in the food industry, and online process control and monitoring (Kosinar)16 using component recognition (Stipancic, Scholz-Reiter).17,18 Smart sensors are key elements for the visual inspection of parts and the visual navigation of robots in assembly tasks that were previously done manually by human operators.…”
Section: Introduction (mentioning)
confidence: 99%
“…Due to the nature of the processed data, VGRs must be able to learn and to "understand" the scene in the real world, so they are often closely related to different approaches in artificial intelligence. Because they show better results for some specific tasks, many studies are being conducted on the application of neural networks (Park)10 or on combinations of machine learning algorithms.2,8,10,20 While the MV market offers various types of vision systems, such as PC-based systems, Smart Camera vision systems, and hybrid Smart Camera vision systems,3 for most VGR tasks it is sufficient to apply two-dimensional (2D) MV systems based on data collected from a single camera.…”
Section: Introduction (mentioning)
confidence: 99%
“…A wide variety of statistical methods have been proposed for object recognition using these databases in recent years. Machine learning techniques such as Support Vector Machines [8,10,35], Boosting [3,31,49,50,56], Random Forests [7,13,39], and more recently Deep Learning [19,27,40] are commonly used to build classifiers robust to large intra-class variations and to other artifacts such as loss of image resolution, rotations, deformations, or lighting changes.…”
Section: Introduction (mentioning)
confidence: 99%
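The statement above names the classical learning techniques commonly used for this kind of recognition task. As a purely illustrative sketch, not taken from the cited paper and using hypothetical synthetic data in place of real image descriptors, the following Python snippet shows the general shape of such a pipeline with one of the named techniques, a Random Forest classifier trained on pre-extracted feature vectors:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: 500 samples with 128-dimensional feature
# vectors (a real system would use descriptors such as HOG or SIFT, or
# learned features) spread over 5 object classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))
y = rng.integers(0, 5, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Random Forests, one of the techniques named in the citation statement,
# average many decorrelated decision trees, which helps them tolerate
# large intra-class variation in the feature space.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

Any real pipeline would replace the synthetic arrays with features extracted from labeled images; the snippet only illustrates the train/evaluate structure the citation statement alludes to.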