2021 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip42928.2021.9506582
CAM-Guided U-Net With Adversarial Regularization For Defect Segmentation

Abstract: Defect segmentation is critical in real-world industrial product quality assessment. There are usually a huge number of normal (defect-free) images but a very limited number of annotated anomalous images. This poses huge challenges to exploiting Fully-Convolutional Networks (FCN), e.g., UNet, as they require sufficient anomalous images with defect annotations during training. To further leverage the information from normal data, a novel CAM-guided U-Net with adversarial regularization (CAM-UNet-AR) is proposed.…
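The "CAM guidance" in the title presumably refers to the standard Class Activation Map construction: weighting the final convolutional feature maps by the linear-classifier weights of a target class to localize discriminative regions. The paper's exact formulation is not shown here, so the following NumPy sketch is only an illustration of that generic CAM step; all shapes and names are assumptions, not the authors' implementation.

```python
import numpy as np

def class_activation_map(features, weights, class_idx):
    """Generic CAM: weighted sum of final conv feature maps.

    features:  (C, H, W) feature maps from the last conv layer
    weights:   (num_classes, C) classifier weights after global average pooling
    class_idx: index of the target class (e.g. a hypothetical "defective" class)
    Returns an (H, W) map, min-max normalized to [0, 1].
    """
    # Contract the channel axis: sum_c w[class_idx, c] * features[c]
    cam = np.tensordot(weights[class_idx], features, axes=([0], [0]))  # (H, W)
    cam -= cam.min()
    denom = cam.max()
    return cam / denom if denom > 0 else cam

# Toy example with random features and weights
rng = np.random.default_rng(0)
features = rng.random((8, 4, 4))   # 8 channels, 4x4 spatial grid
weights = rng.random((2, 8))       # 2 classes
cam = class_activation_map(features, weights, class_idx=1)
```

Such a map could then serve as a coarse localization prior fed alongside the image into a U-Net-style segmentation branch; the adversarial regularization described in the abstract is a separate training-time component not sketched here.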

Cited by 3 publications (1 citation statement)
References 9 publications
“…In industrial product quality assessment, defect segmentation, i.e., delineating defective regions from anomalous products, plays a critical role in evaluating the severity of an anomaly. To automate the process of defect segmentation, there is a surge of research works on exploiting fully-convolutional networks, e.g., UNet and its variants [1][2][3][4][5], to locate defective regions from 2D images of anomalous products [6][7][8][9].…”
Section: Introduction
confidence: 99%