Photonics and Education in Measurement Science 2019
DOI: 10.1117/12.2530333
Efficient 3D object tracking approach based on convolutional neural network and Monte Carlo algorithms used for a pick and place robot

Abstract: Deep Learning (DL) currently demonstrates powerful capabilities for image processing, but it cannot output exact photometric process parameters and its results are not interpretable. Considering these limitations, this paper presents a robot vision system based on Convolutional Neural Networks (CNN) and Monte Carlo algorithms, and uses it as an example to discuss how to apply DL in industry. In the approach, the CNN is used for preprocessing and offline tasks; the 6-DoF object pose is then estimated using a particle …
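The abstract describes estimating a 6-DoF pose with a Monte Carlo (particle filter) stage after CNN preprocessing. As a rough illustration of that idea, here is a minimal bootstrap particle filter tracking a single scalar coordinate, which is a simplified stand-in for the full 6-DoF state; the function name, noise levels, and scalar state are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         motion_noise=0.1, obs_noise=0.5):
    """One predict/update/resample cycle of a bootstrap particle filter
    (illustrative: scalar state instead of a full 6-DoF pose)."""
    # Predict: propagate particles through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_noise, size=particles.shape)
    # Update: reweight by the Gaussian likelihood of the observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Track a hypothetical scalar object coordinate from noisy measurements.
true_pos = 2.0
particles = rng.uniform(-5.0, 5.0, size=1000)
weights = np.full(1000, 1.0 / 1000)
for _ in range(30):
    z = true_pos + rng.normal(0.0, 0.5)
    particles, weights = particle_filter_step(particles, weights, z)

estimate = particles.mean()
```

After a few dozen predict/update/resample cycles, the particle cloud concentrates near the true coordinate; in the full system the same cycle would run over a 6-DoF pose with an image-based likelihood instead of a scalar Gaussian.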

Cited by 2 publications (1 citation statement) · References 14 publications
“…The increased use of IMUs promoted the development of advanced algorithmic methods (e.g., Kalman filters and machine learning, Table 7) for data processing and estimation of parameters that are not directly measured with inertial systems [17]. Optoelectronic technologies performed better and with higher tracking accuracy in human–robot collaboration tasks [33] and robot trajectory planning [32, 46, 52], due to the favourable conditions in such applications (e.g., the limited working volume and the known robot configurations), which allowed cameras to avoid obstructions. In general, hybrid systems that incorporate both vision and inertial sensors were found to have improved tracking performance in noisy and highly dynamic industrial environments, compensating for the drift issues of inertial sensors and the long-term occlusions which can affect camera-based systems [15, 40].…”
Section: Discussion
confidence: 99%
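The citing discussion mentions Kalman filters for fusing inertial and optical tracking. A minimal sketch of that fusion pattern, assuming a 1-D constant-velocity model where an inertial rate signal drives the prediction and sparse camera position fixes correct the drift (state layout, noise values, and the `kalman_fuse` helper are all illustrative assumptions):

```python
import numpy as np

def kalman_fuse(inertial_rates, cam_positions, dt=0.01, q=0.05, r=0.1):
    """1-D constant-velocity Kalman filter: integrate an inertial rate
    signal and correct its drift with camera position fixes, which may
    be missing (None) during occlusions."""
    x = np.zeros(2)                    # state: [position, velocity]
    P = np.eye(2)                      # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])             # camera observes position only
    Q = q * np.eye(2)                      # process noise
    R = np.array([[r]])                    # measurement noise
    estimates = []
    for k, a in enumerate(inertial_rates):
        # Predict: propagate state; inject the inertial rate as a control input.
        x = F @ x + np.array([0.0, a * dt])
        P = F @ P @ F.T + Q
        # Update: only when a camera fix is available (camera may be occluded).
        z = cam_positions[k]
        if z is not None:
            y = z - H @ x                        # innovation
            S = H @ P @ H.T + R                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return estimates
```

With camera fixes available, the position estimate is pulled toward the optical measurement; during `None` gaps the filter coasts on the inertial prediction, which is the drift-compensation behaviour the cited review attributes to hybrid systems.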