Deep neural network (DNN) based models are highly acclaimed in medical image classification, and existing DNN architectures sit at the forefront of image classification. These models, however, require very large datasets to classify images with a high level of accuracy, and they fail to perform when trained on datasets of small size. Low accuracy and overfitting are the problems observed when small medical datasets are used to train a classifier with deep learning models such as Convolutional Neural Networks (CNNs): the existing methods and models either overfit when trained on these small datasets or yield classification accuracy that tends towards randomness. This issue persists even when using Transfer Learning (TL), the current standard for such a scenario. In this paper, we test several models, including ResNet and VGG variants along with more modern models such as MobileNets, on different medical datasets both with and without transfer learning. We put forward arguments for why a more novel approach to this issue is needed, and we show how current methodologies fail when applied to the aforementioned datasets. Larger, more complex models are unable to converge on smaller datasets, whereas smaller, less complex models perform better on the same dataset than their larger counterparts.
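The transfer-learning baseline referred to above can be sketched as follows. This is a minimal, illustrative PyTorch example and not the paper's actual code: the backbone class and its sizes are assumptions standing in for a pretrained feature extractor such as MobileNet, and the backbone is randomly initialised here to keep the sketch self-contained (real TL would load ImageNet weights).

```python
import torch
import torch.nn as nn

class SmallBackbone(nn.Module):
    """Stand-in for a pretrained feature extractor (e.g. MobileNet)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size features
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # shape: (batch, 32)

def build_tl_classifier(num_classes: int = 2) -> nn.Module:
    backbone = SmallBackbone()
    # In real transfer learning the backbone would carry ImageNet weights;
    # the TL recipe then freezes it so only the small head is trained.
    for p in backbone.parameters():
        p.requires_grad = False        # freeze the feature extractor
    head = nn.Linear(32, num_classes)  # trainable classification head
    return nn.Sequential(backbone, head)

model = build_tl_classifier()
out = model(torch.randn(4, 3, 64, 64))  # batch of 4 RGB images
print(out.shape)  # torch.Size([4, 2])
```

With most parameters frozen, only the head's weights are updated during training, which is why TL is the standard recipe for small datasets; the paper's point is that even this recipe can fail on very small medical datasets.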