2019
DOI: 10.1080/0952813x.2019.1653383
Multi-modal classifier fusion with feature cooperation for glaucoma diagnosis

Abstract: Background: Glaucoma is a major public health problem that can lead to optic nerve lesions and requires systematic screening of the population over 45 years of age. The diagnosis and classification of this disease have advanced markedly in recent years, particularly in the machine learning domain. Multimodal data have proven to be a significant aid to machine learning, especially through their contribution to improved data-driven decision-making. Method: Solving classification prob…

Cited by 25 publications (12 citation statements). References 72 publications.
“…These results show that it is beneficial to treat different model features independently. Moreover, HRT data with multiple modalities play a crucial role in clinical diagnosis [58]. More importantly, our proposed SCRD-Net model achieved a sensitivity of 78.3% at a specificity of 0.95, demonstrating an improvement of 7.3% when the structural and textural features are combined.…”
Section: Ablation Experiments
confidence: 80%
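The sensitivity-at-fixed-specificity figure quoted above can be read off an ROC curve. A minimal sketch of that computation (the labels and scores below are synthetic placeholders, scikit-learn is assumed available, and this is not the SCRD-Net evaluation code):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical ground-truth labels and model scores; the real model
# outputs are not available here.
rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, 200)
y_score = rng.rand(200)

fpr, tpr, _ = roc_curve(y_true, y_score)

# Sensitivity (TPR) at specificity 0.95, i.e. at FPR <= 0.05.
target_fpr = 0.05
sensitivity = tpr[fpr <= target_fpr].max()
print(f"Sensitivity at 95% specificity: {sensitivity:.3f}")
```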
“…The classification was performed with Random Forest (RF). Another study that combined non-structural information with features from CNNs was that of Benzebouchi et al. [10], in which the authors proposed a multimodal representation based on extracting features from different CNNs together with non-structural features from GLCM, Hu moments, and central moments.…”
Section: Related Work
confidence: 99%
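A minimal sketch of this kind of feature cooperation, assuming precomputed CNN embeddings and using scikit-image for the GLCM and Hu-moment descriptors (the exact pipeline of [10] is not reproduced here; all data below are synthetic placeholders):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.ensemble import RandomForestClassifier

def handcrafted_features(img):
    """GLCM statistics plus Hu moments for one grayscale uint8 image."""
    glcm = graycomatrix(img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    nu = moments_normalized(moments_central(img.astype(float)))
    return np.concatenate([glcm_feats, moments_hu(nu)])

# Synthetic stand-ins: real CNN embeddings would come from pretrained networks.
rng = np.random.RandomState(0)
images = rng.randint(0, 256, size=(40, 64, 64), dtype=np.uint8)
cnn_embeddings = rng.rand(40, 128)   # placeholder deep features
labels = rng.randint(0, 2, 40)       # glaucoma vs. healthy

handcrafted = np.stack([handcrafted_features(im) for im in images])
fused = np.hstack([cnn_embeddings, handcrafted])  # feature-level cooperation

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(fused, labels)
```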
“…The intermediate fusion strategy fuses features before the final decision layer. In contrast, late fusion fuses the decision results, ignoring any correlation between the different modalities [3].…”
Section: Intermediate Fusion
confidence: 99%
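The contrast between the two strategies can be made concrete with a small sketch (PyTorch assumed; the module and function names are illustrative, not taken from [3]):

```python
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    """Concatenate per-modality feature vectors before one shared decision layer."""
    def __init__(self, dim_a, dim_b, n_classes=2):
        super().__init__()
        self.head = nn.Linear(dim_a + dim_b, n_classes)

    def forward(self, feat_a, feat_b):
        # The joint decision layer can learn cross-modal interactions.
        return self.head(torch.cat([feat_a, feat_b], dim=1))

def late_fusion(logits_a, logits_b):
    """Average per-modality decisions; cross-modal correlations are never modelled."""
    return (logits_a.softmax(dim=1) + logits_b.softmax(dim=1)) / 2
```

Intermediate fusion exposes the combined feature vector to a single trainable decision layer, whereas late fusion can only blend decisions that each modality has already formed on its own.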