2021
DOI: 10.3390/ijerph182111086

COVID-Transformer: Interpretable COVID-19 Detection Using Vision Transformer for Healthcare

Abstract: In the recent pandemic, accurate and rapid testing of patients remained a critical task in the diagnosis and control of COVID-19 disease spread in the healthcare industry. Because of the sudden increase in cases, most countries have faced scarcity and a low rate of testing. Chest X-rays have been shown in the literature to be a potential source of testing for COVID-19 patients, but manually checking X-ray reports is time-consuming and error-prone. Considering these limitations and the advancements in data scie…


Cited by 98 publications (46 citation statements)
References 37 publications
“…Some new COVID-19 detection algorithms based on the ViT architecture have been proposed in a few research projects. Shome et al [28] built a dataset of 30,000 images and trained the ViT model on it. The trained model performed better than CNNs such as EfficientNet-B0, Inception-V3, and ResNet-50 in a multi-classification challenge, with 92% accuracy and 98% AUC.…”
Section: Related Work
confidence: 99%
“…Krishnan et al [17] and Park et al [39] utilize ViT-based models to achieve higher COVID-19 classification accuracy through CXR images. COVID-Transformer [48] and xViTCOS [33] have been proposed to further improve classification accuracy and focus on diagnosis-related regions. However, there is still much room for improvement in training ViT models on small datasets, such as medical imaging datasets.…”
Section: Vision Transformer
confidence: 99%
“…Recently, Vision Transformers (ViTs) ( Zhai et al, 2021 ) with built-in self-attention mechanisms have demonstrated comparable performance to CNNs in natural and medical visual recognition tasks, while requiring fewer computational resources. Several studies ( Liu and Yin, 2021 ; Shome et al, 2021 ; Park et al, 2022 ) used ViTs to improve pulmonary disease detection in frontal CXRs to detect manifestations consistent with COVID-19 disease. Another study ( Duong et al, 2021 ) used a ViT model to detect TB-consistent findings in frontal CXRs and obtained an accuracy of 97.72%.…”
Section: Introduction
confidence: 99%
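The "built-in self-attention mechanism" referenced above can be sketched in a few lines: each image patch embedding attends to every other patch via scaled dot-product attention. This is a minimal single-head sketch with toy dimensions, not the configuration of any cited model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (n_patches, d) patch embeddings; W*: (d, d) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d_k))  # (n, n) weights; rows sum to 1
    return A @ V                          # each patch is a mix of all patches

rng = np.random.default_rng(0)
n, d = 4, 8                               # 4 image patches, 8-dim embeddings
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Because every patch attends to every other patch from the first layer, ViTs capture global context that CNNs only build up through stacked local convolutions, which is one motivation the citing papers give for applying them to CXR images.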