2020
DOI: 10.1088/1757-899x/801/1/012134

Automated pneumatic vacuum suction robotic arm with computer vision

Abstract: In this paper, a 4 DOF robotic arm made of acrylic was built and equipped with a 5 MP USB camera, a pneumatic vacuum suction cup and a small air pump. An image is captured by the camera and then processed with computer vision algorithms. The algorithm extracts the object's coordinates from the image. The extracted coordinate data from the test objects are then fed to the microcontroller, which generates pulses to the driver module to control the motors. Some parameters were changed and…
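The abstract describes a capture, coordinate-extraction, and motor-command pipeline but does not reproduce the algorithm itself. The sketch below shows one way such a pipeline could look in Python with OpenCV and pyserial; the Otsu-threshold/centroid approach, the serial port "/dev/ttyUSB0", and the "x,y" message format are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the capture -> coordinate extraction -> serial output
# pipeline outlined in the abstract. The thresholding method, serial port and
# message format below are assumptions for illustration only.
import cv2
import serial

def extract_object_centroid(frame):
    """Return the (x, y) pixel centroid of the largest dark object, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

if __name__ == "__main__":
    cam = cv2.VideoCapture(0)                    # the 5 MP USB camera
    mcu = serial.Serial("/dev/ttyUSB0", 115200)  # assumed link to the microcontroller
    ok, frame = cam.read()
    if ok:
        centroid = extract_object_centroid(frame)
        if centroid is not None:
            # The microcontroller side would translate this into step pulses
            # for the driver module; the "x,y\n" format is an assumption.
            mcu.write(f"{centroid[0]},{centroid[1]}\n".encode())
    cam.release()
    mcu.close()
```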

Cited by 3 publications (2 citation statements)
References 10 publications
“…YOLOv5 uses GIOU_LOSS as the loss function, which is composed of bounding box confidence loss L_conf, category loss L_cla, and coordinate loss L_GIOU [23,24]. The calculation of the loss function is shown in Equations (3)-(5).…”
Section: Loss Function
confidence: 99%
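The quoted passage refers to Equations (3)-(5) without reproducing them. As background, the standard GIoU box loss that the L_GIOU term builds on is GIoU = IoU - |C \ (A ∪ B)| / |C| and L_GIOU = 1 - GIoU, where C is the smallest box enclosing the predicted box A and ground-truth box B. The sketch below is a minimal, generic implementation of that formula, not the citing paper's code; the (x1, y1, x2, y2) box format is an assumption.

```python
# Minimal sketch of the standard GIoU box loss (the L_GIOU term referenced in
# the citation). Boxes are assumed to be (x1, y1, x2, y2) corner coordinates.
def giou_loss(box_a, box_b):
    """Return L_GIOU = 1 - GIoU for two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)

    # Intersection and union of A and B
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0

    # Smallest enclosing box C
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    giou = iou - (c_area - union) / c_area if c_area > 0 else iou
    return 1.0 - giou

# Example: a perfect prediction gives a loss of 0.0
assert giou_loss((0, 0, 10, 10), (0, 0, 10, 10)) == 0.0
```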
“…Ruan et al [4] reviewed two main methods of fruit location and recognition, including digital image processing techniques and algorithms based on deep learning, and conducted target tracking in dynamic jamming environments. Do et al [5] generated both target and reference model point clouds based on a depth camera system. Lim et al [6] extracted the coordinates of feature points of boxes with different shapes and then controlled the robot.…”
Section: Introduction
confidence: 99%