Deep Neural Networks (DNNs) have achieved impressive performance on a variety of image recognition tasks, but their large model sizes make them challenging to deploy on resource-constrained devices. In this paper, we propose a dynamic DNN pruning approach that takes the difficulty of each incoming image into account during inference. To evaluate the effectiveness of our method, we conducted experiments on the ImageNet dataset with several state-of-the-art DNNs. Our results show that the proposed approach reduces both the model size and the number of DNN operations without the need to retrain or fine-tune the pruned model. Overall, our method provides a promising direction for designing lightweight DNN frameworks that adapt to the varying complexity of input images.
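
To make the core idea concrete, the sketch below illustrates one plausible form of input-adaptive pruning: a normalized-entropy score stands in for image difficulty, and easy inputs execute a convolution with fewer active channels, selected by L1 weight magnitude. The names (`difficulty_score`, `DynamicPrunedBlock`), the entropy-based difficulty proxy, and the magnitude-based channel criterion are illustrative assumptions for this sketch; the abstract does not specify the paper's actual difficulty measure or pruning granularity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def difficulty_score(logits: torch.Tensor) -> torch.Tensor:
    """Normalized softmax entropy as a difficulty proxy (assumption):
    near 0 for confident (easy) inputs, near 1 for uncertain (hard) ones."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy / torch.log(torch.tensor(float(logits.shape[-1])))


def channel_mask(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Keep the output channels with the largest L1 weight norms;
    zero out the rest. No retraining or fine-tuning is involved."""
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    k = max(1, int(keep_ratio * norms.numel()))
    mask = torch.zeros_like(norms)
    mask[norms.topk(k).indices] = 1.0
    return mask.view(1, -1, 1, 1)


class DynamicPrunedBlock(nn.Module):
    """Conv block whose active channel count is chosen per input,
    based on a difficulty value supplied at forward time."""

    def __init__(self, in_ch: int, out_ch: int, min_keep: float = 0.3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.min_keep = min_keep  # floor on the kept-channel ratio (assumption)

    def forward(self, x: torch.Tensor, difficulty: float) -> torch.Tensor:
        # Easy inputs (low difficulty) keep fewer channels; hard ones keep more.
        keep = self.min_keep + (1.0 - self.min_keep) * difficulty
        return F.relu(self.conv(x)) * channel_mask(self.conv, keep)


# Usage: an "easy" image (difficulty 0.2) activates ~44% of the channels.
block = DynamicPrunedBlock(3, 64)
x = torch.randn(1, 3, 32, 32)
y = block(x, difficulty=0.2)
```

In a real pipeline the masked channels would be skipped rather than multiplied by zero, so the compute savings materialize as fewer operations rather than just zeroed activations; the multiplication here only keeps the sketch short and self-contained.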