Although Convolutional Neural Networks (CNNs) have greatly advanced face-related algorithms, maintaining both accuracy and efficiency in real-world applications remains difficult. State-of-the-art approaches improve performance by using deeper networks, but the added computational complexity and parameter count make them impractical for mobile applications. To address these issues, this article presents an object detection model that combines DeepLabv3+ with the Swin Transformer and incorporates a GLTB and a Swin-Conv-Dspp (SCD) module. First, to lessen the impact of the hole phenomenon and the loss of fine-grained detail, we employ the SCD module, which efficiently extracts feature information from objects at multiple scales. Second, to address the difficulty of recognizing occluded objects, the study builds a GLTB with a spatial pyramid pooling shuffle module, which extracts key detail information from the few visible pixels of occluded objects. The Crocodile Search Algorithm (CSA) further improves classification accuracy by guiding the selection of the model's fine-tuning parameters. The proposed model is experimentally validated on the WFLW benchmark dataset. The results show that, compared with other lightweight models, it delivers higher performance with significantly fewer parameters and lower computational complexity.
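
As one illustration of the kind of multi-scale feature extraction the SCD module is described as performing, the minimal PyTorch sketch below builds a depthwise-separable atrous spatial pyramid whose parallel branches cover different receptive fields. This is a hedged illustration under stated assumptions, not the paper's implementation: the class names `DepthwiseSeparableAtrousConv` and `MultiScaleDsppSketch`, the dilation rates, and the channel sizes are all hypothetical choices made for the example.

```python
# Minimal sketch (assumed design, not the paper's exact SCD): parallel
# depthwise-separable atrous convolutions that extract features at
# several scales and fuse them with a 1x1 convolution.
import torch
import torch.nn as nn


class DepthwiseSeparableAtrousConv(nn.Module):
    """3x3 depthwise conv with a given dilation, followed by a 1x1 pointwise conv."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size=3, padding=dilation,
            dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class MultiScaleDsppSketch(nn.Module):
    """Parallel atrous branches with different dilations, fused by a 1x1 conv."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            DepthwiseSeparableAtrousConv(in_ch, out_ch, d) for d in dilations)
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        # Each branch sees the same input at a different effective receptive
        # field, so both small and large objects contribute to the fused map.
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(feats)


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)      # e.g. a backbone feature map
    module = MultiScaleDsppSketch(64, 128)
    print(module(x).shape)              # torch.Size([1, 128, 32, 32])
```

The parallel-dilation layout mirrors the ASPP idea used in DeepLabv3+; the depthwise-separable convolutions are what keep the parameter count and computational cost low, which is the efficiency trade-off the abstract emphasizes.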