This paper presents an empirical study of advanced Deep Neural Network (DNN) models, with a focus on identifying potential baseline models for efficient deployment in resource-constrained environments (RCE). The systematic evaluation encompasses ten state-of-the-art pre-trained DNN models: ResNet50, InceptionResNetV2, InceptionV3, MobileNet, MobileNetV2, EfficientNetB0, EfficientNetB1, EfficientNetB2, DenseNet121, and Xception, within an RCE setting. Evaluation criteria, namely parameter count (indicating model complexity), storage space (reflecting on-device storage requirements), CPU usage time (relevant to real-time applications), and accuracy (reflecting prediction correctness), are assessed through systematic experimental procedures. The results highlight MobileNet's excellent trade-off between accuracy and resource requirements, particularly in CPU and storage consumption, in experimental scenarios where image predictions are performed on an RCE device. Building on the identified baseline model, a new model, GRM-MobileNet, was developed by applying compound scaling and global average pooling techniques. GRM-MobileNet exhibits a 23.81% reduction in parameters compared to MobileNet, leading to a model size that is 23.88% smaller. Moreover, GRM-MobileNet achieves a 28.12% gain in accuracy over MobileNet. Although the improvement in inference time for GRM-MobileNet over MobileNet is modest at 1.66%, the overall gains underscore the effectiveness of the employed strategies in enhancing the model's performance. A future study will examine additional model optimization strategies, including factorization and pruning, which can yield faster inference without compromising accuracy, to further improve the efficiency of GRM-MobileNet and its inference time.
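
As a rough illustration of the two techniques named above, the sketch below (assuming a TensorFlow/Keras environment) shows how a scaled MobileNet backbone can be combined with a global-average-pooling head. The width multiplier, input resolution, and class count are illustrative assumptions, not the published GRM-MobileNet settings, and only the width and resolution dimensions of compound scaling are shown; depth scaling would require modifying the backbone architecture itself.

```python
# Minimal sketch (assumed TensorFlow/Keras API): pairing a width- and
# resolution-scaled MobileNet backbone with a global-average-pooling head.
# The alpha value, input resolution, and 10-class output are illustrative
# assumptions, not the paper's exact GRM-MobileNet configuration.
import tensorflow as tf

def build_scaled_mobilenet(alpha=0.75, resolution=192, num_classes=10):
    """MobileNet variant scaled in width (alpha) and input resolution,
    topped with global average pooling instead of a large dense head."""
    backbone = tf.keras.applications.MobileNet(
        input_shape=(resolution, resolution, 3),
        alpha=alpha,            # width multiplier: thins every layer's channel count
        include_top=False,      # drop the original classifier head
        weights=None,           # train from scratch for this illustration
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)  # collapse HxW to a vector
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(backbone.input, outputs)

if __name__ == "__main__":
    baseline = tf.keras.applications.MobileNet(weights=None)  # unscaled reference
    scaled = build_scaled_mobilenet()
    print("baseline params:", baseline.count_params())
    print("scaled params:  ", scaled.count_params())
```

In this kind of construction, global average pooling removes the parameter-heavy dense top while the width multiplier thins every convolutional layer, which is broadly how parameter and model-size reductions of the kind reported above can be obtained.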