2021
DOI: 10.1016/j.resconrec.2021.105685
Deep learning computer vision for the separation of Cast- and Wrought-Aluminum scrap

Cited by 29 publications (11 citation statements)
References 20 publications

“…As shown in bold in Table II, the best object detection results are generally obtained by combining RGB and ED images, which show, in the case of Faster R-CNN, an increase of 10% in AP50:95 compared to RGB images alone. Previous work has shown that mid-to-late fusion of RGB and depth images can improve object recognition [23], [27]. For Mask R-CNN, there is only a slight difference between using RGB, RGBD, or RGBED inputs.…”
Section: Results (mentioning)
confidence: 99%
“…However, for RGBD fusion, the RGB and D images are fed into two separate subnetworks with the same architecture. The outputs of layers C2 and C3 from the two subnetworks are then concatenated to obtain a uniform output [23]. In the following, ResNet50_FPN_RGB, ResNet50_FPN_RGBD, and ResNet50_FPN_RGBED denote either the ResNet50_FPN architecture for pure RGB images or the modified ResNet50_FPN for the fusion of RGB and D/ED images.…”
Section: A. Backbone Structure (mentioning)
confidence: 99%
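
A minimal sketch of this kind of mid-level RGB/depth fusion is given below, assuming PyTorch and torchvision. It interprets the C2/C3 fusion as a channel-wise concatenation of the corresponding ResNet-50 feature maps (layer1 and layer2 outputs) from the two branches; the MidFusionBackbone name, the 1x1 fusion convolutions, and the 3-channel rendering of the depth/ED input are illustrative assumptions, not the implementation from the citing paper or from [23].

```python
# Illustrative only: mid-fusion of an RGB branch and a depth/ED branch built
# from two ResNet-50 stems, concatenating their C2 (layer1) and C3 (layer2)
# feature maps. Class and layer choices are assumptions, not the cited code.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class MidFusionBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb = resnet50(weights=None)
        self.depth = resnet50(weights=None)  # same architecture for the D/ED branch
        # Assumed 1x1 convs that squeeze the concatenated maps back to the
        # channel widths expected by the deeper ResNet stages.
        self.fuse_c2 = nn.Conv2d(2 * 256, 256, kernel_size=1)
        self.fuse_c3 = nn.Conv2d(2 * 512, 512, kernel_size=1)

    @staticmethod
    def _stem(net, x):
        x = net.relu(net.bn1(net.conv1(x)))
        return net.maxpool(x)

    def forward(self, rgb, depth):
        r, d = self._stem(self.rgb, rgb), self._stem(self.depth, depth)
        r2, d2 = self.rgb.layer1(r), self.depth.layer1(d)    # C2 maps (256 ch)
        r3, d3 = self.rgb.layer2(r2), self.depth.layer2(d2)  # C3 maps (512 ch)
        c2 = self.fuse_c2(torch.cat([r2, d2], dim=1))
        c3 = self.fuse_c3(torch.cat([r3, d3], dim=1))
        # The fused maps continue through shared deeper stages; in a detector
        # these outputs would feed an FPN and the Faster/Mask R-CNN heads.
        c4 = self.rgb.layer3(c3)
        c5 = self.rgb.layer4(c4)
        return {"c2": c2, "c3": c3, "c4": c4, "c5": c5}


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 512, 512)     # RGB frame
    depth = torch.rand(1, 3, 512, 512)   # depth/ED rendered as 3 channels (assumed)
    feats = MidFusionBackbone()(rgb, depth)
    print({k: tuple(v.shape) for k, v in feats.items()})
```

In a full detection pipeline the fused pyramid levels would be passed through an FPN before the region-proposal and box/mask heads, but those stages are omitted here for brevity.
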
“…Both systems were implemented in an automatic sorting machine, achieving an average sorting accuracy of up to 94.71% ± 1.69 (Zhang et al., 2021). The presented research builds on an earlier study that classified C&W Al by evaluating five CNN deep learning models and two transfer learning methods (Díaz-Romero et al., 2021). That study showed that fusing RGB and 3D images at the last layer of the DenseNet network improves the classification of the evaluated dataset.…”
Section: Deep Learning in Recycling (mentioning)
confidence: 92%
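
The "fusion of RGB and 3D images at the last layer of the DenseNet network" referred to above can be sketched roughly as follows, again assuming PyTorch and torchvision. The LateFusionDenseNet class, the choice of DenseNet-121, the global average pooling, the 3-channel rendering of the 3D image, and the single linear head over the concatenated features are illustrative assumptions rather than the exact model evaluated in Díaz-Romero et al. (2021).

```python
# Illustrative only: "late fusion at the last layer" of two DenseNet feature
# extractors (one for RGB, one for the 3D/depth image), concatenated just
# before a cast-vs-wrought classifier head. Exact model details are assumed.
import torch
import torch.nn as nn
from torchvision.models import densenet121


class LateFusionDenseNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.rgb = densenet121(weights=None).features
        self.depth = densenet121(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # DenseNet-121 produces 1024-channel feature maps per branch.
        self.classifier = nn.Linear(2 * 1024, num_classes)

    def forward(self, rgb, depth):
        f_rgb = self.pool(torch.relu(self.rgb(rgb))).flatten(1)
        f_depth = self.pool(torch.relu(self.depth(depth))).flatten(1)
        # Late fusion: concatenate the two pooled feature vectors.
        return self.classifier(torch.cat([f_rgb, f_depth], dim=1))


if __name__ == "__main__":
    logits = LateFusionDenseNet()(torch.rand(2, 3, 224, 224),
                                  torch.rand(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])
```
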
“…A dataset of 120 Cast (C) and 428 Wrought (W) Al scrap samples and 134 SS samples of different shapes (e.g., compact, bar, sheet, pipe, and irregular), with a mass distribution between 5 and 200 grams (g), was collected from a Belgian recycling facility. The Wrought and Cast pieces were used in a previous study to classify Al scrap (Díaz-Romero et al., 2021). The 548 Al samples (C&W) were collected randomly from the Twitch fraction.…”
Section: Methods (mentioning)
confidence: 99%