2020
DOI: 10.5391/ijfis.2020.20.1.59
Stable Acquisition of Fine-Grained Segments Using Batch Normalization and Focal Loss with L1 Regularization in U-Net Structure

Abstract: Acquisition of fine-grained segments is important in most semantic segmentation applications, especially for clothing images composed of fine-grained textures. However, most existing semantic segmentation methods based on the fully convolutional network (FCN) are not sufficient to acquire fine-grained segments because they operate at a single resolution and cannot distinguish well between objects in the images. To stabilize the acquisition of fine-grained segments, we propose a method that…

Cited by 9 publications (4 citation statements)
References 14 publications (30 reference statements)
“…In order to make the results more intuitive, they are shown in Figures 17 and 18. The first attempt was to add batch normalization to the network [27]. The essence of the neural network learning process is to learn about data distribution [28]. Therefore, the introduction of batch normalization to normalize the data in the network can improve the network promotion ability and speed up the training speed to a certain extent.…”
Section: Further Optimization
confidence: 99%
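The batch-normalization step described in the statement above can be sketched in a few lines of NumPy. This is a minimal illustration of the normalization itself (zero mean, unit variance per feature, followed by a learnable scale and shift), not the cited paper's implementation:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch (rows = samples, columns = features) to zero
    mean and unit variance per feature, then scale by gamma and shift
    by beta. eps guards against division by zero."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy batch of 3 samples with 2 features each
x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = batch_norm(x)
```

After normalization each column of `y` has (approximately) zero mean and unit variance, which is what keeps the distribution of activations stable from batch to batch during training.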
“…The network structure is based on U-Net with some modifications. Since blood vessels have distributed sizes, and the U-net structure is little sensitive to very high spatial frequency information [48], there is a deviation in the reconstruction of peripheral blood vessels. So we added the batch normalization (BN) [49] method after each down-sampling convolutional layer, to overcome this problem (as shown in Figure 2).…”
Section: B. Network Architecture
confidence: 99%
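For convolutional feature maps, as in the U-Net modification above, batch normalization is computed per channel across the batch and spatial dimensions. A minimal NumPy sketch of this per-channel normalization for an (N, C, H, W) tensor, assuming the conventional channels-first layout (not the cited paper's code):

```python
import numpy as np

def batch_norm_2d(x, eps=1e-5):
    """Per-channel batch normalization for feature maps shaped
    (N, C, H, W): statistics are taken over the batch and spatial
    axes, so each channel is normalized independently. This is the
    form typically inserted after a convolutional layer."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Hypothetical feature map from a down-sampling convolution:
# batch of 2, 4 channels, 8x8 spatial resolution
feat = np.random.default_rng(0).normal(5.0, 3.0, size=(2, 4, 8, 8))
norm = batch_norm_2d(feat)
```

Placing this after each down-sampling convolution keeps every channel's activation statistics comparable across depth, which is the stabilizing effect the citing authors rely on.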
“…Focal loss was proposed by He et al [21] in 2017. Focal loss takes the different level of training difficulty of samples into consideration and focuses more on the difficult-to-train samples; therefore, it has been applied in many fields, such as object detection, imbalanced data classification, and so on [22][23][24]. To identify classes with fewer training samples more accurately, a modified focal loss function is used to replace cross-entropy loss function in the proposed model.…”
Section: Related Work
confidence: 99%
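The down-weighting of easy samples that the statement above describes can be sketched for the binary case. This is a generic focal-loss sketch with the standard focusing parameter gamma, not the modified loss of the cited paper:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss. p = predicted probability of the positive
    class, y = 0/1 label, gamma = focusing parameter. The factor
    (1 - p_t)**gamma shrinks the loss on well-classified (easy)
    examples so training concentrates on hard ones. With gamma = 0
    this reduces to ordinary cross-entropy."""
    p = np.clip(p, eps, 1.0 - eps)          # avoid log(0)
    p_t = np.where(y == 1, p, 1.0 - p)      # probability of the true class
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

# An easy positive (p = 0.9) contributes far less loss than a hard one (p = 0.1)
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
```

Setting `gamma=0` recovers plain cross-entropy, which is the baseline the citing authors replace; larger gamma values push the loss further toward the difficult-to-train samples.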