2018
DOI: 10.1007/978-3-319-78759-6_25
Medical Image Classification with Hand-Designed or Machine-Designed Texture Descriptors: A Performance Evaluation

Cited by 19 publications (3 citation statements)
References 27 publications
“…Deniz et al [16] used a fine-tuning technique and found that fine-tuning the last layer of pretrained AlexNet outperformed the SVM classifier that used the feature fusion of both pretrained AlexNet and VGG16. Badejo et al [17] discovered that LBP outperformed AlexNet when comparing hand-crafted feature descriptors to the extracted features of pretrained AlexNet. For the first time, a magnification-independent binary classification (MIB) was introduced in [28].…”
Section: Literature Review
confidence: 99%
“…These recognition systems were implemented using custom CNNs with the structure captured in Table 5. CNN was considered the tool for the development of the model in this work because it has been proven to yield better performances compared with hand-crafted methods, especially on large-scale data samples [48]. Finally, to improve the performance of the voice recognition system, the concept of voting was adopted for the predictions of spectrograms generated from an utterance.…”
Section: Unimodal Systems Design
confidence: 99%
“…A deep CNN consists of an input layer that contains image data of m training examples, multiple hidden layers that compute features from input images and an output layer, which classifies the learned images. Deep learning models employ non-linear transformation functions to solve complex large-scale problems (Reyes et al, 2015;Li et al, 2016;Badejo et al, 2018). As shown in Figure 1, the hidden layers consist of stacked convolution layers that convolve using a Rectified Linear Unit (ReLU) activation (or transfer) function, as well as a pooling layer, which reduces the dimension of the convoluted image.…”
Section: Convolutional Neural Network
confidence: 99%
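The forward pass the quoted passage describes (stacked convolution layers with ReLU activation followed by pooling that reduces the spatial dimension) can be sketched in plain NumPy. This is a minimal illustration only: the 8×8 image and 3×3 kernel are random placeholders, not weights or data from any of the cited models.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Rectified Linear Unit: element-wise max(0, x)."""
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling; each 2x2 window collapses to its max."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# One conv -> ReLU -> pool stage over a toy 8x8 "image".
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))

features = relu(conv2d(image, kernel))   # 6x6 feature map
pooled = max_pool(features)              # 3x3 after 2x2 pooling
print(features.shape, pooled.shape)      # (6, 6) (3, 3)
```

A real classifier such as those referenced above would stack several of these stages, learn the kernels by backpropagation, and finish with fully connected layers feeding the output classes; this sketch only shows how each stage shrinks the spatial dimensions while keeping activations non-negative.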