This paper presents a comprehensive review of advanced techniques and models, with a specific focus on deep neural networks (DNNs) in resource-constrained environments (RCEs). It contributes by characterizing RCE devices, analyzing their deployment challenges, reviewing a broad range of optimization techniques and DNN models, and offering a comparative assessment. The review covers key DNN optimization techniques, including network pruning, weight quantization, knowledge distillation, depthwise separable convolution, residual connections, factorization, dense connections, and compound scaling, and analyzes established optimization models that employ these techniques. Each technique and model is examined with respect to its specific attributes, usability, strengths, and limitations for effective deployment in RCEs. The review also provides a comparative assessment of advanced DNN models for image classification, using accuracy together with efficiency metrics such as memory footprint and inference time. It concludes that combining depthwise separable convolution, weight quantization, and pruning constitutes a promising optimization strategy, and recommends EfficientNetB1 as a baseline model for the future development of optimized models for RCE image classification.