2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
DOI: 10.1109/bibm52615.2021.9669422
An End-to-end Entangled Segmentation and Classification Convolutional Neural Network for Periodontitis Stage Grading from Periapical Radiographic Images

Cited by 7 publications (7 citation statements)
References 31 publications
“…Here, we use the previously developed DL model, which integrates segmentation and classification models with image analysis methods to measure RBL percentage and assign periodontal stages from periapical radiographs [3,20]. Assigning stages using the bone loss percentage is based on the 2018 periodontitis classification [43].…”
Section: Periodontal Diagnosis
confidence: 99%
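The staging rule referenced in the excerpt above maps a measured radiographic bone loss (RBL) percentage to a periodontal stage following the 2018 classification. Below is a minimal sketch of such a mapping, assuming the commonly cited cut-offs (Stage I: RBL < 15%, Stage II: 15–33%, Stage III/IV: > 33%); the exact thresholds and the grouping of Stages III and IV are assumptions for illustration, not details taken from the cited model.

```python
def assign_periodontal_stage(rbl_percentage: float) -> str:
    """Map a radiographic bone loss (RBL) percentage to a periodontal stage.

    Thresholds follow commonly cited 2018-classification cut-offs (assumed
    here): Stage I < 15%, Stage II 15-33%, Stage III/IV > 33%.
    Separating Stage III from Stage IV needs extra clinical information
    (e.g., tooth loss), so the two are grouped in this sketch.
    """
    if rbl_percentage < 0 or rbl_percentage > 100:
        raise ValueError("RBL percentage must be in [0, 100]")
    if rbl_percentage < 15:
        return "Stage I"
    if rbl_percentage <= 33:
        return "Stage II"
    return "Stage III/IV"


# Example: a tooth with 22% bone loss is graded Stage II.
print(assign_periodontal_stage(22.0))
```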
“…Deep learning (DL) models have been utilized in several medical image analysis tasks, such as brain tumor segmentation [14], breast cancer diagnosis [15], lung cancer [16], prostate cancer [17], and Parkinson's disease [18], achieving higher performance than other state-of-the-art methods. In the last few years, DL models have been developed in dentistry to diagnose diseases from dental radiographs, including caries [4], radiographic bone loss (RBL) [3,19,20], and periapical lesions [21]. However, although these models perform well in detecting abnormalities, they cannot recognize tooth numbers to provide detailed diagnostic information on individual teeth, which limits the clinical application of these DL models.…”
Section: Introduction
confidence: 99%
“…Tajinda et al. integrated segmentation and classification tasks for grading periodontitis from periapical radiographic images to create HYNETS, an end-to-end hybrid deep learning network for periodontitis staging from radiographs. By combining segmentation networks with a classification network, HYNETS uses a multi-task learning technique to provide a complete, interpretable solution with highly accurate and reliable results [83]. Together with data pre-processing and augmentation techniques, Szu-Yin Lin and Hao-Yun Chang have created an innovative and effective two-phase DPR detection methodology to help dentists in diagnosis using advanced DL algorithms.…”
Section: Results
confidence: 99%
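The multi-task design described in the excerpt above, a shared feature extractor feeding both a segmentation head and a classification head trained jointly, can be sketched as follows. This is an illustrative PyTorch sketch, not the HYNETS architecture itself; the layer sizes, the `num_stages` parameter, and the equal loss weighting are assumptions.

```python
import torch
import torch.nn as nn

class EntangledSegClsNet(nn.Module):
    """Illustrative multi-task CNN: a shared encoder feeds a segmentation
    decoder and a classification head (hypothetical layer sizes)."""

    def __init__(self, num_stages: int = 4):
        super().__init__()
        # Shared encoder (downsamples the radiograph by 4x).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Segmentation head: upsamples back to input resolution (binary mask).
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
        # Classification head: global pooling + linear layer over stages.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_stages),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)


# Joint multi-task loss: segmentation + classification, equally weighted here.
model = EntangledSegClsNet()
image = torch.randn(2, 1, 128, 128)                    # toy radiograph batch
mask = torch.randint(0, 2, (2, 1, 128, 128)).float()   # toy segmentation labels
stage = torch.randint(0, 4, (2,))                      # toy stage labels
seg_logits, cls_logits = model(image)
loss = (nn.functional.binary_cross_entropy_with_logits(seg_logits, mask)
        + nn.functional.cross_entropy(cls_logits, stage))
loss.backward()
```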
“…Different types of images were used by different researchers based on the techniques they applied in DI. Radiographic images [16, 41, 43, 56, 63, 64, 65, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 87, 157], near-infrared light transillumination (NILT) [88, 89, 90], intraoral images [66, 86, 91, 92, 93, 95, 96, 97, 158, 159, 160], and 3D models [102, 113, 114, 115, 161] were used in the research on dental disease diagnosis on the 3D dental model. The studies on dental disease diagnosis on the CBCT dental model used CT images [124], 3D CT scans […
Section: Results
confidence: 99%
“…Furthermore, a neural network is a useful tool for fully automated segmentation, and its architecture consists of different layers for convolution, pooling, and classification [11]. The necessary training data and segmentation gold standard are usually obtained by manual segmentation [12–15]. Kohl et al. [16] proposed the U-Net convolutional neural network for medical image segmentation, recognizing features and the position of the region of interest, and extended the U-Net from a planar to a 3-dimensional (3D) structure.…”
confidence: 99%
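The U-Net pattern mentioned in the excerpt above, an encoder-decoder with skip connections that can be built from 3D convolutions instead of 2D ones for volumetric data, can be illustrated with a minimal sketch. This is a generic toy example, not the network from the cited work; the channel counts and volume size are arbitrary assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3D convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class TinyUNet3D(nn.Module):
    """Minimal 3D U-Net: one downsampling level with a skip connection."""

    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)           # 32 = 16 (skip) + 16 (upsampled)
        self.out = nn.Conv3d(16, num_classes, 1)

    def forward(self, x):
        e = self.enc(x)                          # encoder features (kept for skip)
        b = self.bottleneck(self.pool(e))        # downsample, then deeper features
        u = self.up(b)                           # upsample back to input resolution
        d = self.dec(torch.cat([u, e], dim=1))   # concatenate skip connection
        return self.out(d)                       # per-voxel class logits


# Example: segment a 32x32x32 volume into 2 classes.
logits = TinyUNet3D()(torch.randn(1, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([1, 2, 32, 32, 32])
```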