“…After that, we used a VGG-16 model pre-trained in PyTorch [17], with 135,310,918 parameters in total, of which 1,050,374 are trainable and 134,260,544 are frozen (non-trainable). The batch size was set to 128, the learning rate to 0.005, the dropout rate to 0.4 [18], and the output size to 3. A summary of our VGG-16 model is given in Figure 7, and its architecture diagram is shown in Figure 8.…”