Medical image classification with deep convolutional neural networks is a crucial component of computer-aided diagnosis. Conventional methods rely primarily on shape, color, or texture descriptors and their combinations, most of which are problem-specific and have been shown to capture only complementary aspects of the image data; the resulting systems cannot represent high-level problem-domain concepts and generalize poorly. Emerging Deep Learning (DL) techniques make it possible to build end-to-end models that learn the final classification directly from raw clinical image data. DL methods, however, suffer from high computational demands and costs in model building and inference, owing to the high dimensionality of clinical images and the small size of available datasets. To mitigate these concerns, in this work we propose a DL framework that fuses high-level features learned by a deep network with a set of traditional handcrafted features. The proposed model is constructed in three stages. First, we train a DL model in a supervised manner as an encoder, so that it maps raw pixels of medical images into feature representations that capture high-level concepts relevant to image classification. Second, we extract a collection of conventional handcrafted features using domain knowledge of the image data. Finally, we design a fusion method based on deep neural networks to combine the feature groups produced in the first two stages. The proposed method is evaluated on benchmark medical imaging datasets, achieving overall classification accuracies of 90.1% and 90.2%, which are higher than those of existing competitive approaches.
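To make the three-stage pipeline concrete, the following is a minimal PyTorch sketch of how a learned CNN encoder (stage 1) and a handcrafted-feature vector (stage 2) can be concatenated and passed to a small fusion network (stage 3). The class name `FusionClassifier`, the layer sizes, and the placeholder `handcrafted_dim` are illustrative assumptions, not the architecture, feature set, or datasets used in this work.

```python
import torch
import torch.nn as nn


class FusionClassifier(nn.Module):
    """Sketch of the three-stage pipeline: a CNN encoder for deep features,
    a handcrafted-feature vector, and a fusion network that combines them."""

    def __init__(self, num_classes: int, handcrafted_dim: int = 32):
        super().__init__()
        # Stage 1: supervised CNN encoder mapping raw pixels to high-level features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                      # -> (batch, 32) deep features
        )
        # Stage 3: fusion network over concatenated deep + handcrafted features.
        self.fusion = nn.Sequential(
            nn.Linear(32 + handcrafted_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, images: torch.Tensor, handcrafted: torch.Tensor) -> torch.Tensor:
        deep_feats = self.encoder(images)                 # stage 1 output
        fused = torch.cat([deep_feats, handcrafted], 1)   # fuse stage 1 + stage 2
        return self.fusion(fused)                         # stage 3: classification


# Example forward pass with random tensors standing in for real data.
model = FusionClassifier(num_classes=2, handcrafted_dim=32)
images = torch.randn(4, 3, 128, 128)   # batch of medical images (placeholder)
handcrafted = torch.randn(4, 32)       # stage 2: conventional features (placeholder)
logits = model(images, handcrafted)
print(logits.shape)                    # torch.Size([4, 2])
```

Under these assumptions, the fusion step is simple feature concatenation followed by fully connected layers; any other combination scheme (e.g., weighted or gated fusion) could be substituted at that point without changing the rest of the pipeline.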