To improve the segmentation accuracy of automotive front face elements such as headlights and grilles, and to support more effective and efficient partial modification design, this paper introduces MD-TransUNet, a semantic segmentation network based on the TransUNet model. MD-TransUNet integrates multi-scale attention gates (MSAG) and a dynamic-channel graph convolution network (DCGCN) to enhance image restoration across various design drawings. To improve accuracy and detail retention when segmenting automotive front face elements, the DCGCN models global channel relationships among contextual sequences, strengthening the Transformer's channel encoding capability. In addition, a multi-scale attention-based decoder restores feature map dimensions, mitigating the loss of local detail in the Transformer's feature encoding. Experimental results demonstrate that the MSAG module significantly improves the model's ability to capture fine details, while the DCGCN module improves the segmentation accuracy of the shapes and edges of headlights and grilles. MD-TransUNet outperforms existing models on the automotive front face dataset, achieving an mF-score, mIoU, and OA of 95.81%, 92.08%, and 98.86%, respectively. Consequently, MD-TransUNet increases the precision of automotive front face element segmentation and enables a more advanced and efficient approach to partial modification design.
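As an illustration of the gating mechanism underlying attention-gate decoders of this kind, the sketch below shows an additive attention gate in the style of Attention U-Net, where a gating signal reweights a skip-connection feature map. All shapes, weight names, and the per-pixel 1x1 projections are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate: the gating signal g produces a
    spatial attention map that reweights the skip feature x.

    x:   skip feature map,   shape (C, H, W)
    g:   gating feature map, shape (C, H, W), already resized to match x
    w_x, w_g: (C_int, C) weights of 1x1 convolutions (per-pixel linear maps)
    psi: (C_int,) projection collapsing the joint feature to one channel
    """
    # 1x1 convolutions expressed as channel-wise linear maps at every pixel
    q = np.einsum('ic,chw->ihw', w_x, x) + np.einsum('ic,chw->ihw', w_g, g)
    q = np.maximum(q, 0.0)                                          # ReLU
    alpha = 1.0 / (1.0 + np.exp(-np.einsum('i,ihw->hw', psi, q)))   # sigmoid
    return x * alpha[None, :, :]              # gate the skip feature

# Toy example: 4 channels, 8x8 feature maps, 2 intermediate channels
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
g = rng.standard_normal((4, 8, 8))
out = attention_gate(x, g, rng.standard_normal((2, 4)),
                     rng.standard_normal((2, 4)), rng.standard_normal(2))
print(out.shape)  # (4, 8, 8)
```

Because the attention map lies in (0, 1), the gate can only attenuate the skip feature, which is how such decoders suppress irrelevant background regions while preserving fine structures like lamp and grille edges.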