2022 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip46576.2022.9897844

Localization and Classification of Parasitic Eggs in Microscopic Images Using an EfficientDet Detector

Abstract: Intestinal parasitic infections (IPIs) caused by protozoan and helminth parasites are among the most common infections in humans in low- and middle-income countries (LMICs). They are regarded as a severe public health concern, as they cause a wide array of potentially detrimental health conditions. Researchers have been developing pattern recognition techniques for the automatic identification of parasite eggs in microscopic images. Existing solutions still need improvements to reduce diagnostic errors and generate fast, efficient, and accurate results. Our paper addresses this…

Cited by 5 publications (4 citation statements). References 23 publications.
“…All methods in this competition exploited deep learning techniques. Most of them were developed based on the state of the art in object detection, including YOLOv5 [15,16], Fast-RCNN [13,17], EfficientDet [18], Cascade R-CNN [15,19,20], CBNetV2 [21,22], CenterNet2 [23], Task-aligned One-stage object Detection (TOOD) [17,24], and RetinaNet [25,26]. These methods are convolutional neural networks with various backbone architectures, where the most popular architecture is based on ResNet blocks [17, 22-25, 27, 28].…”
Section: Methods
confidence: 99%
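The detectors named in this citation differ in architecture but share one inference contract: an image tensor in, scored and class-labelled boxes out. Below is a minimal sketch of that workflow, shown with torchvision's COCO-pretrained RetinaNet (one of the cited detector families); the image path and the 0.5 score threshold are placeholders, not values from any cited method.

import torch
from PIL import Image
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# COCO-pretrained RetinaNet with a ResNet-50 FPN backbone.
model = retinanet_resnet50_fpn(pretrained=True)
model.eval()

# "sample.jpg" is a stand-in path, not an image from the challenge dataset.
image = to_tensor(Image.open("sample.jpg").convert("RGB"))

with torch.no_grad():
    # torchvision detectors take a list of 3xHxW tensors and return one dict
    # per image with 'boxes' (xyxy pixel coordinates), 'labels', and 'scores'.
    pred = model([image])[0]

keep = pred["scores"] > 0.5  # arbitrary confidence cut-off for display
print(pred["boxes"][keep], pred["labels"][keep])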
“…These methods are convolutional neural networks with various backbone architectures, where the most popular architecture is based on ResNet blocks [17, 22-25, 27, 28]. Most methods employed Feature Pyramid Network (FPN) to extract multi-scale features [18,22,24,25], whilst some methods employed Transformer-based architectures as feature extractors [19,20,26,29,30]. The models were pretrained with COCO dataset [20-25, 27, 29, 30], or ImageNet-1K [19].…”
Section: Methods
confidence: 99%
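As a concrete illustration of the FPN idea this citation refers to, the sketch below runs torchvision.ops.FeaturePyramidNetwork over dummy feature maps; the channel widths are the standard ResNet-50 stage sizes, chosen only for illustration.

from collections import OrderedDict
import torch
from torchvision.ops import FeaturePyramidNetwork

# Stand-ins for backbone feature maps at strides 4, 8, 16, and 32.
feats = OrderedDict([
    ("c2", torch.randn(1, 256, 64, 64)),
    ("c3", torch.randn(1, 512, 32, 32)),
    ("c4", torch.randn(1, 1024, 16, 16)),
    ("c5", torch.randn(1, 2048, 8, 8)),
])

# The FPN projects every level to a common width and adds a top-down path,
# so a detection head sees multi-scale features of equal depth.
fpn = FeaturePyramidNetwork(in_channels_list=[256, 512, 1024, 2048], out_channels=256)
outputs = fpn(feats)
for name, tensor in outputs.items():
    print(name, tuple(tensor.shape))  # every level now carries 256 channels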
“…Zocco et al [43] improved the efficiency of EfficientDet for realtime and low-light object-detection in marine debris detection. Aldahoul et al [44] used EfficientDet for the localization and classification of parasitic eggs in microscopic images, achieving robust performance. Carmo et al [45] used a modified version of EfficientDet for airway segmentation in computed tomography images, achieving high Dice scores.…”
Section: Complexity and Propensity Towards False Positives
confidence: 99%
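For EfficientDet itself, which torchvision does not ship, the third-party effdet package (pip install effdet) is one common way to instantiate the model. The sketch below is illustrative only, under that package's current API, and is not the configuration used in the paper or the works cited above.

import torch
from effdet import create_model

# bench_task='predict' wraps the network so its forward pass returns decoded
# detections rather than raw class/box logits (per the effdet package API).
model = create_model("tf_efficientdet_d0", bench_task="predict", pretrained=True)
model.eval()

# EfficientDet-D0 expects 512x512 inputs; random data stands in for a real image.
images = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    detections = model(images)

# Each detection row is (xmin, ymin, xmax, ymax, score, class).
print(detections.shape)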
“…Various machine learning techniques, including Convolutional Neural Networks (CNNs) [5,21,25], Sequential Minimal Optimization (SMO) [3], Support Vector Machine (SVM) classifiers [4,15], K-Nearest Neighbor (KNN) classifiers, and Artificial Neural Networks (ANNs) [24], have been employed to address the challenges inherent in this domain. The evolution of object detection models, from hand-crafted features and classifiers to deep neural networks [30, 41-45, 49], has significantly improved computational efficiency and accuracy. Despite these advancements, the field continues to grapple with challenges such as the detection of small objects and variations in drawing styles [50].…”
Section: Complexity and Propensity Towards False Positives
confidence: 99%
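The classical pipeline this citation describes, hand-crafted feature vectors fed to an SVM classifier, can be illustrated in a few lines of scikit-learn; the features, labels, and four-class setup below are random placeholders, not data from any cited study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))    # 32-dim descriptors per egg candidate (placeholder)
y = rng.integers(0, 4, size=200)  # four hypothetical egg classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardising features before an RBF-kernel SVM is the usual recipe.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))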