2019
DOI: 10.1007/s00170-018-3184-2

An initial point alignment method of narrow weld using laser vision sensor

Cited by 12 publications (5 citation statements). References 24 publications.
“…In air, using Equation (11), the camera's field of view is calculated to be 173.9073 mm × 309.1704 mm, where W_a and H_a represent the width and height of the FOV, w and h are the width and height of the sensor, respectively, and f denotes the camera's focal length.…”
Section: Results Of the Experiments On Underwater Target Position Cal... (mentioning)
confidence: 99%
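The equation the fragment cites did not survive extraction, and only the resulting field of view is given. As a rough illustration of the usual pinhole-camera FOV relation (assuming a working distance D, which the fragment does not state; all numeric values below are hypothetical):

```python
def field_of_view(sensor_w_mm, sensor_h_mm, focal_mm, distance_mm):
    # Pinhole relation: W_a = w * D / f and H_a = h * D / f, with w, h the
    # sensor dimensions, f the focal length, and D the working distance.
    scale = distance_mm / focal_mm
    return sensor_w_mm * scale, sensor_h_mm * scale

# Hypothetical sensor size, focal length, and working distance; the quoted
# fragment reports only the resulting FOV (173.9073 mm x 309.1704 mm).
w_fov, h_fov = field_of_view(sensor_w_mm=5.76, sensor_h_mm=3.24,
                             focal_mm=16.0, distance_mm=500.0)
print(f"FOV: {w_fov:.1f} mm x {h_fov:.1f} mm")
```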
“…Alignment operations using vision sensors have been widely studied in various fields. For instance, Fan et al [11] proposed a laser-vision-sensor-based method for initial point alignment of narrow weld seams by utilizing the relationship between laser streak feature points and initial points. They obtain a high signal-to-noise image of the narrow weld seam using the laser vision sensor, and then calculate the 3D coordinates of the final image feature point and the initial point based on the alignment model.…”
Section: Introduction (mentioning)
confidence: 99%
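The passage summarizes the alignment model only at a high level. Below is a minimal sketch of generic laser-plane triangulation, one common way to turn a laser-stripe image feature point into 3D camera-frame coordinates; the intrinsic matrix and plane parameters are hypothetical, and this is not claimed to be the cited authors' exact model:

```python
import numpy as np

def pixel_to_3d_on_laser_plane(u, v, K, plane):
    # Back-project pixel (u, v) along its viewing ray and intersect the ray
    # with the calibrated laser plane a*X + b*Y + c*Z + d = 0 (camera frame).
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    a, b, c, d = plane
    t = -d / (a * ray[0] + b * ray[1] + c * ray[2])
    return t * ray  # 3D point on the laser plane, camera coordinates

# Hypothetical calibration values, for illustration only.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
laser_plane = (0.0, 0.70, -0.71, 150.0)
print(pixel_to_3d_on_laser_plane(655.0, 472.0, K, laser_plane))
```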
“…Through this approach, the grouped convolution in ResNeXt enhances the model's accuracy without significantly increasing the parameter count. To further reduce the parameter count, the number of groups in the convolution is set to 4, and the number of blocks in the convolutional layers is reduced from the original [3,4,6,3] to [3,3,3,3], meaning each convolutional layer contains 3 block structures.…”
Section: Enhancements To Wipl-net (mentioning)
confidence: 99%
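The quoted configuration (grouped 3x3 convolutions with 4 groups and [3, 3, 3, 3] blocks per stage) can be instantiated with torchvision's stock ResNet constructor. This sketches only that backbone configuration, not the full WIPL-Net of the citing paper, and width_per_group is an assumption since the quote does not specify it:

```python
from torchvision.models.resnet import ResNet, Bottleneck

# ResNeXt-style backbone with the quoted settings: 4 convolution groups and
# [3, 3, 3, 3] bottleneck blocks per stage (down from the original [3, 4, 6, 3]).
backbone = ResNet(Bottleneck, layers=[3, 3, 3, 3], groups=4, width_per_group=64)

n_params = sum(p.numel() for p in backbone.parameters())
print(f"{n_params / 1e6:.1f} M parameters")
```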
“…With the advancement of machine learning and computer vision, visual sensors are increasingly being applied in robotic automated welding [3,4]. How to make use of visual sensors and design intelligent visual algorithms for identification and localization has become a key and inevitable trend in achieving autonomous and intelligent welding robots [5,6].…”
Section: Introduction (mentioning)
confidence: 99%
“…Later, Shao et al [11] continued to develop the previously mentioned system by using a particle filter in order to make it more robust. Fan et al [12] introduced a method based on digital image processing for initial point alignment in narrow robotic welding. This method allowed detection of the weld seam center point when a laser stripe line was projected over a junction, and it was used later by Fan et al [13] to develop a weld seam tracking system.…”
Section: Introduction (mentioning)
confidence: 99%
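A minimal sketch of one common way to realize the seam-center detection the quote describes: extract the imaged laser stripe as a column-wise intensity centroid and take the point where it deviates most from a straight line (i.e., where the stripe bends or breaks at the joint). The threshold and the centroid/line-fit steps are assumptions, not the cited authors' exact image-processing pipeline:

```python
import numpy as np

def seam_center_from_stripe(gray, thresh=200):
    # Column-wise centroid of bright stripe pixels gives the stripe centerline;
    # the column deviating most from a straight-line fit marks the seam center.
    h, w = gray.shape
    rows = np.arange(h, dtype=float)
    centerline = np.full(w, np.nan)
    for col in range(w):
        weights = np.where(gray[:, col] >= thresh, gray[:, col], 0).astype(float)
        if weights.sum() > 0:
            centerline[col] = (rows * weights).sum() / weights.sum()

    valid = ~np.isnan(centerline)
    cols = np.arange(w)[valid]
    line_fit = np.polyval(np.polyfit(cols, centerline[valid], 1), cols)
    seam_col = int(cols[np.argmax(np.abs(centerline[valid] - line_fit))])
    return seam_col, float(centerline[seam_col])
```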