In this paper, a deep belief learning (DBL) network architecture is proposed for medical image classification, with the aim of improving the diagnosis of dermal melanoma as an alternative to traditional dermoscopy. Preprocessing was carried out with a linear Gaussian filter to eliminate high-frequency artifacts and distortion. The K-means segmentation technique was then used to extract the region of interest, and the DBL network was applied to the segmented image for classification. The DBL architecture distributes its weights and hyperparameters across all positions in an image, making it possible to scale to various image sizes. Overfitting on small datasets was mitigated by optimizing the proposed network, and the algorithm performs effectively when its constraints are fine-tuned. The results showed that the proposed model improved classification accuracy over AlexNet and LeeNet by 8% and 47%, respectively, for segmented images; similarly, improvements of between 2% and 48% were observed for nonsegmented images. Average error reductions of 47.8% and 41.5% were recorded for segmented and nonsegmented dermal images, respectively. Execution time also decreased by an average of 8-13% compared with the other architectures, since the weights were distributed only over the clustered regions of the segmented image rather than the whole image, allowing the network to classify faster with improved accuracy.
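The preprocessing and segmentation stages can be illustrated with a minimal sketch. The paper does not publish code, so the filter width, cluster count, lesion-selection heuristic, and library choices below are assumptions made purely for illustration, not the authors' implementation.

```python
# Illustrative sketch only: sigma, k, and the "darker cluster is the lesion"
# heuristic are assumptions, not parameters reported in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def preprocess_and_segment(image: np.ndarray, sigma: float = 1.0, k: int = 2) -> np.ndarray:
    """Smooth a grayscale dermoscopy image with a Gaussian filter, then
    separate the region of interest with K-means on pixel intensities."""
    # Linear Gaussian smoothing to suppress high-frequency artifacts and distortion.
    smoothed = gaussian_filter(image.astype(np.float32), sigma=sigma)
    # Cluster individual pixel intensities into k groups.
    pixels = smoothed.reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    mask = labels.reshape(image.shape)
    # Keep the darker cluster as the lesion region (assumed heuristic).
    lesion_cluster = int(np.argmin([smoothed[mask == c].mean() for c in range(k)]))
    return np.where(mask == lesion_cluster, image, 0)
```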
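For the classification stage, a generic deep-belief-style pipeline can be sketched with greedy layer-wise pretraining of restricted Boltzmann machines followed by a supervised classifier. The layer sizes, learning rates, and the use of scikit-learn below are assumptions for illustration; this is not the proposed DBL architecture itself.

```python
# Sketch of a DBN-style classifier under assumed hyperparameters; the actual
# DBL network, its weight distribution scheme, and its tuning are not shown here.
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def build_dbn_classifier() -> Pipeline:
    """Stack two RBM feature layers, then classify with logistic regression."""
    return Pipeline([
        ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Usage (X: flattened segmented images scaled to [0, 1]; y: lesion labels):
# model = build_dbn_classifier()
# model.fit(X_train, y_train)
# accuracy = model.score(X_test, y_test)
```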