2019
DOI: 10.3390/app9183900

Texture Segmentation: An Objective Comparison between Five Traditional Algorithms and a Deep-Learning U-Net Architecture

Abstract: This paper compares a series of traditional and deep learning methodologies for the segmentation of textures. Six well-known texture composites first published by Randen and Husøy were used to compare traditional segmentation techniques (co-occurrence, filtering, local binary patterns, watershed, multiresolution sub-band filtering) against a deep-learning approach based on the U-Net architecture. For the latter, the effects of depth of the network, number of epochs and different optimisation algorithms were in…

Cited by 23 publications (15 citation statements) · References 63 publications

“…The U-Net can be trained end-to-end from relatively few pairs or patches of images and their corresponding classes. Applications of U-Nets include cell counting, detection, and morphometry [87], automatic brain tumour detection and segmentation [88], and texture segmentation [89].…”
Section: PLOS ONE
confidence: 99%
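
The patch-based, end-to-end training this statement describes can be illustrated with a short sketch. The Python/PyTorch snippet below is an assumption (the function name, sizes, and counts are invented for the example, not taken from the cited papers); it samples aligned image/label patches from a single annotated image, which is how a U-Net can be trained from relatively few labelled examples:

```python
import torch

def random_patches(image, mask, patch_size, n_patches):
    """Sample aligned (image, label) patches from one annotated image."""
    _, height, width = image.shape  # image: (channels, H, W)
    pairs = []
    for _ in range(n_patches):
        top = torch.randint(0, height - patch_size + 1, (1,)).item()
        left = torch.randint(0, width - patch_size + 1, (1,)).item()
        pairs.append((image[:, top:top + patch_size, left:left + patch_size],
                      mask[top:top + patch_size, left:left + patch_size]))
    return pairs

# e.g. 64 training patches of 128x128 from a single 512x512 annotated texture
image = torch.randn(3, 512, 512)
mask = torch.zeros(512, 512, dtype=torch.long)
patches = random_patches(image, mask, patch_size=128, n_patches=64)
```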
“…In this architecture, the encoder and decoder represent the rails of a ladder, and their connections its rungs. The U-Net has symmetric contracting and expanding paths, and concatenates the feature maps from the encoder to the corresponding upsampled maps from the decoder by copy and crop (69,70). This allows the decoder to reconstruct relevant features that are lost when pooled in the encoder.…”
Section: Deep Learning Methods
confidence: 99%
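
The copy-and-crop concatenation described above can be made concrete. The PyTorch module below is a minimal sketch of one expanding-path step (UpBlock and its parameters are illustrative names, not from the cited works): it upsamples the decoder map, centre-crops the corresponding encoder map to the same spatial size, and concatenates the two along the channel axis before convolving:

```python
import torch
import torch.nn as nn

def center_crop(enc_feat, target_hw):
    """Crop an encoder feature map to the decoder's spatial size ("copy and crop")."""
    _, _, h, w = enc_feat.shape
    th, tw = target_hw
    top, left = (h - th) // 2, (w - tw) // 2
    return enc_feat[:, :, top:top + th, left:left + tw]

class UpBlock(nn.Module):
    """One expanding-path step: upsample, crop-and-concatenate, convolve."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch // 2 + skip_ch, out_ch, kernel_size=3), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3), nn.ReLU(inplace=True),
        )

    def forward(self, x, enc_feat):
        x = self.up(x)                              # upsample the decoder map
        skip = center_crop(enc_feat, x.shape[-2:])  # copy-and-crop the encoder map
        x = torch.cat([skip, x], dim=1)             # concatenate along channels
        return self.conv(x)

# e.g. decoder map (1, 128, 28, 28) + encoder map (1, 64, 64, 64) -> (1, 64, 52, 52)
out = UpBlock(in_ch=128, skip_ch=64, out_ch=64)(torch.randn(1, 128, 28, 28),
                                                torch.randn(1, 64, 64, 64))
```

It is the cropped encoder features carried across these connections that let the decoder recover the spatial detail lost to pooling, as the quoted passage notes.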
“…Assuming that k_w and k_h are the width and height of the convolution kernel, respectively, W′ and H′ are calculated as W′ = W − k_w + 1 and H′ = H − k_h + 1. Each filter in the output feature map was obtained by adding the outputs of the individual filters applied to the three input channels (R, G, and B) [37,38]. In this study, features were extracted using 2D separable convolution, which filters each channel separately.…”
Section: Phase II: Feature Extraction
confidence: 99%
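
The output-size arithmetic and the per-channel filtering can be checked with a minimal sketch, assuming a PyTorch depthwise-separable convolution (the channel counts and kernel sizes below are illustrative): with no padding, a k_h × k_w kernel reduces an H × W input to H′ = H − k_h + 1 by W′ = W − k_w + 1, and groups=3 filters the R, G, and B channels separately before a 1×1 convolution mixes them:

```python
import torch
import torch.nn as nn

H, W = 64, 64    # input spatial size
kh, kw = 3, 5    # kernel height and width

# Depthwise step: groups=3 applies a separate filter to each input channel
# (R, G, B); padding=0 gives the "valid" sizes H' = H-kh+1, W' = W-kw+1.
depthwise = nn.Conv2d(3, 3, kernel_size=(kh, kw), groups=3, padding=0)
pointwise = nn.Conv2d(3, 8, kernel_size=1)  # 1x1 convolution mixes channels

y = pointwise(depthwise(torch.randn(1, 3, H, W)))
print(y.shape)  # torch.Size([1, 8, 62, 60]): H' = 64-3+1, W' = 64-5+1
```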