2019
DOI: 10.1007/s10278-019-00229-9

Assessment of Critical Feeding Tube Malpositions on Radiographs Using Deep Learning

Abstract: To assess the efficacy of deep convolutional neural networks (DCNNs) in the detection of critical enteric feeding tube malpositions on radiographs, 5475 de-identified, HIPAA-compliant frontal-view chest and abdominal radiographs were obtained, consisting of 174 x-rays of bronchial insertions and 5301 non-critical radiographs, including normal course, normal chest, and normal abdominal x-rays. The ground-truth classification for enteric feeding tube placement was performed by two board-certified radiologists. Untrained…

Cited by 35 publications (31 citation statements) | References 10 publications
“…Furthermore, lateral radiographs are often not assessed, despite clear evidence that they contain clinically important information. 11 Deep-learning chest x-ray analysis systems have been developed to automate lung segmentation and bone exclusion; 12 diagnose tuberculosis; 13 detect pneumonia, 14,15 COVID-19, 16 pneumothorax, 17 pneumoconiosis, 18 and lung cancer; 19 identify the position of feeding tubes; 20 and to predict temporal changes in imaging findings. 21 Deep-learning diagnostic tools have also been shown to improve the classification accuracy of radiologists in the detection of pulmonary nodules, 22 pneumoconiosis, 18 pneumonia, 14,15 emphysema, 7 and pleural effusion.…”
Section: Introduction
confidence: 99%
“…The development of AI systems for object detection and recognition in x-rays is an emerging field, and many such studies have used transfer learning. Unlike disease diagnosis, the use of AI for object detection focuses mainly on therapeutic tubes and catheters [20][21][22]. In a similar study on the placement of feeding tubes, pre-trained deep CNN models (Inception V3, ResNet50, and DenseNet121) achieved AUC values of 0.82-0.87, far lower than those of the models developed for disease diagnosis [22].…”
Section: Discussion
confidence: 97%
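The AUC figures quoted in the statement above are computed from the model's predicted probabilities against the ground-truth labels. A minimal sketch with scikit-learn, using small synthetic labels and scores that are purely illustrative (not data from the cited studies):

```python
from sklearn.metrics import roc_auc_score

# Hypothetical example: 1 = critical malposition, 0 = non-critical placement.
y_true = [0, 0, 1, 1, 0, 1]
# Model-predicted probability of the positive (critical) class.
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

# AUC is the probability that a randomly chosen positive case is
# scored higher than a randomly chosen negative case.
auc = roc_auc_score(y_true, y_score)
print(auc)
```

Here 8 of the 9 positive-negative pairs are ranked correctly, so the AUC is 8/9 ≈ 0.889; the 0.82-0.87 range reported for the feeding-tube models is interpreted the same way.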
“…However, there is no consensus among studies concerning the globally optimal configuration for fine-tuning. [24] concluded that fine-tuning the last fully connected layers of Inception V3, ResNet50, and DenseNet121 outperformed fine-tuning from scratch in all cases, with AUC values ranging from 0.82 to 0.85. On the other hand, Yu et al [25] found that retraining DenseNet201 from scratch achieved the highest diagnostic accuracy, 92.73%.…”
Section: Discussion
confidence: 97%
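The fine-tuning strategy debated above (freezing a pretrained convolutional backbone and retraining only the final fully connected layers) can be sketched in PyTorch. The tiny backbone below is a hypothetical stand-in for a pretrained network such as DenseNet121, kept small so the sketch runs offline; the two output classes correspond to critical malposition vs. non-critical placement:

```python
import torch
import torch.nn as nn

# Stand-in backbone; in the cited studies this would be a network such as
# DenseNet121 with ImageNet-pretrained weights (hypothetical simplification).
backbone = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # grayscale radiograph input
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Freeze the backbone so gradients flow only through the new head,
# mirroring "fine-tuning the last fully connected layers".
for p in backbone.parameters():
    p.requires_grad = False

# New classification head: 2 classes (critical vs. non-critical).
head = nn.Linear(8, 2)
model = nn.Sequential(backbone, head)

# Only the head's parameters remain trainable.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

Retraining from scratch, the alternative configuration attributed to Yu et al [25], would simply skip the freezing loop so every parameter stays trainable.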