Traditionally, convolutional neural networks require large amounts of human-labelled data to train. Self-supervision has been proposed as a way of coping with small amounts of labelled data. The aim of this study is to determine whether self-supervision can increase classification performance on a small COVID-19 CT scan dataset. This study also aims to determine whether the proposed self-supervision strategy, targeted self-supervision, is a viable option for a COVID-19 imaging dataset. A total of 10 experiments are run comparing the classification performance of the proposed self-supervision method with different amounts of data. The experiments run with the proposed self-supervision strategy perform significantly better than their non-self-supervised counterparts. We observe an increase in accuracy of almost 8% with full self-supervision compared to no self-supervision. The results suggest that self-supervision can improve classification performance on a small COVID-19 CT scan dataset. Code for targeted self-supervision can be found at this link: https://github.com/Mewtwo/Targeted-Self-Supervision/tree/main/COVID-CT
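The sketch below illustrates the general pattern the abstract describes: self-supervised pretraining on unlabelled CT images followed by supervised fine-tuning on the small labelled set. The pretext task shown (rotation prediction), the ResNet-18 backbone, and all tensor shapes are assumptions for illustration only; the actual targeted self-supervision strategy is defined in the linked repository.

# Minimal sketch: self-supervised pretraining, then supervised fine-tuning.
# The rotation-prediction pretext task and backbone choice are illustrative
# assumptions, not the paper's exact "targeted self supervision" method.
import torch
import torch.nn as nn
import torchvision.models as models

def rotate_batch(images):
    # Rotate each image by a random multiple of 90 degrees; the rotation index
    # (0-3) serves as a free "label" for the pretext task.
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

backbone = models.resnet18(weights=None)
feature_dim = backbone.fc.in_features
backbone.fc = nn.Identity()
criterion = nn.CrossEntropyLoss()

# Stage 1: self-supervised pretraining on unlabelled CT slices.
rotation_head = nn.Linear(feature_dim, 4)
pretrain_opt = torch.optim.Adam(
    list(backbone.parameters()) + list(rotation_head.parameters()), lr=1e-4
)
unlabelled_batch = torch.randn(8, 3, 224, 224)   # stand-in for unlabelled CT images
rotated, rot_labels = rotate_batch(unlabelled_batch)
loss = criterion(rotation_head(backbone(rotated)), rot_labels)
pretrain_opt.zero_grad()
loss.backward()
pretrain_opt.step()

# Stage 2: supervised fine-tuning on the small labelled COVID / non-COVID set.
classifier = nn.Linear(feature_dim, 2)
finetune_opt = torch.optim.Adam(
    list(backbone.parameters()) + list(classifier.parameters()), lr=1e-5
)
labelled_batch = torch.randn(8, 3, 224, 224)     # stand-in for labelled CT images
labels = torch.randint(0, 2, (8,))               # 0 = non-COVID, 1 = COVID
loss = criterion(classifier(backbone(labelled_batch)), labels)
finetune_opt.zero_grad()
loss.backward()
finetune_opt.step()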
Neural networks often require large amounts of expert-annotated data to train. When changes are made to the medical imaging process, trained networks may not perform as well, and obtaining large amounts of expert annotations for each change can be time-consuming and expensive. Online unsupervised learning has been proposed as a way to deal with situations where incoming data exhibit a domain shift and annotations are unavailable. The aim of this study is to see whether online unsupervised learning can help COVID-19 CT scan classification models adjust to slight domain shifts when no annotations are available for the new data. A total of six experiments are performed using three test datasets with differing amounts of domain shift. These experiments compare the online unsupervised learning strategy against a baseline and compare how the strategy performs across the different domain shifts. Code for online unsupervised learning can be found at this link: https://github.com/Mewtwo/online-unsupervisedlearning
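As a rough illustration of what online unsupervised adaptation to a shifted, unlabelled test stream can look like, the sketch below performs entropy minimisation on each incoming batch while updating only the batch-norm parameters (a Tent-style update). This is an assumed example, not the exact procedure used in the study; the actual online unsupervised learning code is in the linked repository.

# Minimal sketch: online unsupervised adaptation to a domain-shifted test stream
# via entropy minimisation on incoming unlabelled batches. Backbone, learning
# rate, and batch shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)   # COVID vs. non-COVID head
model.train()                                   # batch-norm statistics track the new domain

# Adapt only the affine batch-norm parameters; freeze everything else.
adapt_params = []
for module in model.modules():
    if isinstance(module, nn.BatchNorm2d):
        adapt_params += [module.weight, module.bias]
for p in model.parameters():
    p.requires_grad_(False)
for p in adapt_params:
    p.requires_grad_(True)

optimizer = torch.optim.Adam(adapt_params, lr=1e-4)

def entropy(logits):
    # Mean prediction entropy of the batch; low entropy means confident predictions.
    probs = torch.softmax(logits, dim=1)
    return -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()

# Each incoming unlabelled batch from the shifted domain drives one adaptation step,
# and the predictions on that same batch are what get reported.
incoming_batch = torch.randn(8, 3, 224, 224)    # stand-in for a domain-shifted CT batch
logits = model(incoming_batch)
loss = entropy(logits)
optimizer.zero_grad()
loss.backward()
optimizer.step()
predictions = logits.argmax(dim=1)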