2022
DOI: 10.1063/5.0084593
Segmentation and accurate identification of large carious lesions on high quality x-ray images based on Attentional U-Net model. A proof of concept study

Abstract: Dental caries is a bacterial infectious disease that destroys the structure of teeth. It is one of the main diseases that endanger human health [R. H. Selwitz, A. I. Ismail, and N. B. Pitts, Lancet 369(9555), 51–59 (2007)]. At present, dentists use both visual exams and radiographs for the detection of caries. Affected by the patient's dental health and the degree of caries demineralization, it is sometimes difficult to accurately identify some dental caries in x-ray images with the naked eye. Therefore, denti…

Cited by 5 publications (7 citation statements)
References 26 publications
“…(to be continued on next page)
(533/59/-) PAN 3 dentists (+10y) yes [77]
(1250/-/125) PERI dentists (unspecified) yes [81]
(3293/252/141) BTW 4 dentists (+3y) yes [82]
(492/65/64) BTW 1 dentist + 1 radiologist (+9y) yes [15]
(800/-/200) BTW 2 dentists (+10y) yes [83]
photos + 120 x-rays * [57] Photos + X-Rays 2 dentists (+3y) yes [1]
(1071/-/89) PAN 4 dentists (3-15y) yes [88]
BTW 1 experienced dentist in oral radiology yes [38]
PAN radiologists (unspecified) yes [34]
(30/11/10) PAN 1 dentist + 1 radiologist yes [40]
(447/127/61) PAN - yes [37]
(485/69/139) PERI 2 dentists + 1 resident dentist yes [61]
(80/-/20) PAN 2 dentists yes [84]
(400/50/50) BTW 1 radiologist (11y) and a research assistant (3y) yes [62]
(1200/150/150) PAN Clinicians (unspecified) yes [85]
(935/117/117) PERI 1 radiologist (12y) and a research assistant (2y) yes [51]
(76/-/32) PAN 3 dentists (+3y) yes [52]
(980/-/420) PAN 3 dentists (+3y) yes [63]
(90/-/10) PAN Radiologists (unspecified, +5y) yes [53]
(1104/111/121) RVG Dentists (unspecified) yes [89]
(175/-/75) PAN Dentists (unspecified, +5y) yes [87]
(457/-/195) Unspecified 4 dentists yes [78]
(193/83/1224) PAN - no [66]
(2507/835/835) PAN 1 dentist + 1 dental student (last year) yes [79]
PANs + 682 PERI/BTW * [3] PAN + PERI + BTW 3 dentists yes [86]
PAN dentists (unspecified) yes [35]
(1005/335/335) BTW 2 dentists yes [80]
(1000/0/200) BTW Dentists (unspecified) no [41] (
(iii) number and expertise of professionals involved in the annotation task and (iv) whether the paper presents any information about the dataset annotation protocol. This information is used to investigate RQ1. Concerning RQ1.1, the selected papers exploited three different types of radiograph: panoramic (forty-six papers, 66.67%), periapical (nine papers, 13.04%), and bitewing (eight papers, 11.59%).…”
Section: Datasets
confidence: 99%
“…One study (1.45%) combined panoramic and periapical [45], another combined periapical and bitewing [74], another combined panoramic, periapical and bitewing [79], another used RadioVisioGraphy (RVG) [53] and another manipulated panoramic and high-definition photographs of the mouth [83]. One article did not specify the type of radiograph [87]. The average number of samples per dataset is 1234 images.…”
Section: Datasets
confidence: 99%
“…In this paper, the binary cross-entropy over pixels was used as the training objective for the U-Net model [16][17].…”
Section: Image Segmentation Using Residual Double Attention U-Net Model
confidence: 99%
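The per-pixel binary cross-entropy objective mentioned in that citation statement can be sketched as follows. This is a minimal NumPy illustration of the loss itself, not the paper's actual training code; the function name `pixelwise_bce` and the toy masks are ours:

```python
import numpy as np

def pixelwise_bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over all pixels.

    pred:   predicted foreground probabilities in (0, 1), shape (H, W)
    target: binary ground-truth segmentation mask, shape (H, W)
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(target * np.log(pred)
                           + (1.0 - target) * np.log(1.0 - pred))))

# A confident, correct prediction yields a small loss; a confidently
# wrong one yields a large loss — this gradient signal is what trains
# the segmentation network pixel by pixel.
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
good = np.array([[0.9, 0.1], [0.1, 0.9]])
bad  = np.array([[0.1, 0.9], [0.9, 0.1]])
print(pixelwise_bce(good, mask))  # ≈ 0.105
print(pixelwise_bce(bad, mask))   # ≈ 2.303
```

In practice a framework primitive (e.g. a logits-based BCE loss) would be used for numerical stability, but the quantity computed is the same.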
“…Self-attention was one of the main drivers behind the emergence of transformers in the Natural Language Processing (NLP) domain, where they had a huge impact on language-model benchmarks and are now the state of the art. The transformer model was primarily developed for neural machine translation by A. Vaswani et al [25] and later adapted by A. Dosovitskiy et al [29] to visual data, making multi-headed self-attention possible for images without the use of convolutions. Since then, numerous variations of the transformer model have been used for computer vision applications such as image classification, object detection, and image segmentation.…”
Section: Attention-Based and Transformer-Based
confidence: 99%
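Per head, the multi-headed self-attention that the quoted passage credits to [25] and [29] reduces to scaled dot-product attention over a sequence of token (or image-patch) embeddings. A minimal single-head NumPy sketch, purely illustrative (the function names and random projection matrices are ours, not from any of the cited models):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention for one head.

    x: (n_tokens, d_model); wq/wk/wv: (d_model, d_k) learned projections.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # pairwise token affinities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v                        # attention-weighted values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))              # e.g. 4 image patches, ViT-style
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(tokens, wq, wk, wv)
print(out.shape)  # (4, 8)
```

A multi-head layer runs several such heads with independent projections and concatenates their outputs; applying this to flattened image patches rather than words is the adaptation [29] describes.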