2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489329
Comparing LBP, HOG and Deep Features for Classification of Histopathology Images

Abstract: Medical image analysis has become a topic under the spotlight in recent years, and there has been significant progress in applying machine learning to medical image research. However, numerous questions and problems still await answers and solutions. In the present study, three classification models are compared using features extracted with local binary patterns (LBP), the histogram of oriented gradients (HOG), and a pre-trained deep network. Three common image classification methods, …

Cited by 79 publications (50 citation statements)
References 25 publications (29 reference statements)
“…We achieved very good classification performance, with AUCs between 0.99 and 1 for distinguishing normal and tumor samples. Comparing this performance to previous work, we note that in one study of histopathology images [42], classification accuracy reached 81.14% using features extracted from a pre-trained VGG 19 (similar to VGG 16) network. In a similar study of histopathological images of breast cancer [43], classification performance on 400 H&E-stained images of 2048 × 1536 pixels each reached an AUC of 0.963 for distinguishing non-carcinoma vs. carcinoma samples.…”
Section: Discussion (supporting)
confidence: 50%
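The AUC figures quoted above can be computed from classifier scores via the Mann-Whitney rank statistic. A minimal sketch (the function name is our own, not from the cited works):

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC as the Mann-Whitney U statistic: the probability that a
    randomly chosen positive sample scores above a randomly chosen
    negative one (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separated tumor vs. normal scores give AUC = 1.0.
print(roc_auc([0.99, 0.9, 0.8], [0.1, 0.2, 0.3]))  # -> 1.0
```

An AUC between 0.99 and 1, as reported above, means almost every tumor sample was scored above almost every normal sample.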
“…To intuitively compare the effect of our feature with other mainstream features (LBP [14], HOG [13], GLCM [15], FHOG [29]) on the task of floating object detection, a comparative experiment was conducted under seven challenging water scenes: large waves, circular ripples, spray, turbulent water surface, strip ripples, near-view reflection, and far-view reflection.…”
Section: Intuitive Results of Texture Detection (mentioning)
confidence: 99%
“…Traditional methods that combine hand-crafted features with classifiers are simple and fast [12]. Hand-crafted texture features with grayscale invariance, e.g., Histogram of Oriented Gradients (HOG) [13], Local Binary Pattern (LBP) [14], and Gray Level Co-occurrence Matrix (GLCM) [15], are suitable for describing floating objects under uneven illumination. But these features do not use global information, making the feature descriptors inaccurate when other severe interferences exist.…”
mentioning
confidence: 99%
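As a concrete illustration of the LBP descriptor referenced in these statements, here is a minimal pure-NumPy sketch (function names are our own): each pixel's eight neighbours are thresholded against the centre pixel to form an 8-bit code, and the normalized histogram of codes serves as the texture feature vector.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a
    2-D grayscale image (no interpolation, radius 1)."""
    c = img[1:-1, 1:-1]  # centre pixels
    # Eight neighbours, ordered clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes -- the texture descriptor
    that would be fed to a classifier."""
    hist, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

On a perfectly flat image every neighbour ties with the centre, so all eight bits are set and the histogram concentrates at code 255; textured regions spread mass across many codes, which is what makes the histogram discriminative.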
“…Also, Kiefer et al. [31] explored the use of deep features from several pre-trained structures on Kimia24, controlling for the impact of transfer learning and finding an advantage of pre-trained networks over training from scratch. Alhindi and colleagues [32] instead analyze Kimia960 for slide of origin (20 slides preselected by visual inspection), and, similarly to our study, compare alternative classifiers as well as feature extraction models in a 3-fold CV setup. Our framework differs in the DAP structure and is originally applied to the larger HINT set without any visual…”
mentioning
confidence: 76%
“…Deep learning refers to a class of machine learning methods that model high-level abstractions in data through the use of modular architectures, typically composed of multiple nonlinear transformations estimated by training procedures. Notably, deep learning architectures based on Convolutional Neural Networks (CNNs) hold state-of-the-art accuracy in numerous image classification tasks without prior feature selection. Further, intermediate steps in the pipeline of transformations implemented by CNNs or other deep learning architectures can provide a mapping (embedding) from the original feature space into a deep feature space.…”
mentioning
confidence: 99%
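The embedding idea described in the statement above — truncating the forward pass at an intermediate layer and using its activations as features — can be sketched with a toy two-layer network (the random weights stand in for a pre-trained model; this is an illustration of the concept, not the cited pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 64-dim input -> 10-dim hidden layer -> 2 class logits.
# The weights are random stand-ins for pre-trained parameters.
W1 = rng.standard_normal((64, 10))
W2 = rng.standard_normal((10, 2))

def deep_features(x):
    """Stop the forward pass at the hidden layer: the ReLU activation
    vector is the 'deep feature' embedding of the input."""
    return np.maximum(x @ W1, 0.0)

def logits(x):
    """The full classifier reuses the same embedding internally."""
    return deep_features(x) @ W2

x = rng.standard_normal(64)        # stand-in for a flattened image patch
embedding = deep_features(x)       # 10-dim deep feature vector
```

In the transfer-learning setting discussed here, such embeddings from a network pre-trained on a large dataset are fed to a separate classifier (e.g., an SVM), rather than training the whole network from scratch.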