In the adaptive machining of the leading and trailing edges of near-net-shaped blades, a small portion of the theoretical model, called the reconstruction area, is retained to secure aerodynamic performance; this area is currently handled by manual work. The next step is to recognize the reconstruction area in images of the reconstructed leading/trailing edge. To accelerate this process, we propose an anchor-free, Transformer-based neural network model named the Leading/trailing Edge Transformer (LETR). LETR extracts image features in a mixed frequency and channel domain, and it integrates the recent meta-ACON activation function. On the self-built LDEG2021 dataset, LETR achieves an mAP of 91.9% on a single GPU, surpassing our baseline model, Deformable DETR, by 1.1%. Furthermore, we modify LETR’s convolution layers to obtain a lightweight model for real-time detection, named the Ghost Leading/trailing Edge Transformer (GLETR). Test results show that GLETR has fewer weight parameters and converges faster than LETR, with an acceptable decrease in mAP (0.1%). The proposed models provide the basis for subsequent parameter extraction in the reconstruction area.
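For reference, the following is a minimal PyTorch sketch of the meta-ACON activation named above, written from its published formulation (Ma et al., "Activate or Not", CVPR 2021); the reduction ratio and parameter initialization are illustrative assumptions, not the configuration used in LETR.

# Minimal meta-ACON sketch (assumed standard formulation, not the authors' exact code):
#   f(x) = (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x,
# where beta is predicted per channel by a small network over pooled features.
import torch
import torch.nn as nn

class MetaAconC(nn.Module):
    def __init__(self, channels: int, r: int = 16):  # r is an assumed reduction ratio
        super().__init__()
        hidden = max(r, channels // r)
        # Learnable per-channel parameters p1 and p2.
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        # Small bottleneck that predicts the switching factor beta from channel statistics.
        self.fc1 = nn.Conv2d(channels, hidden, kernel_size=1)
        self.fc2 = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average pooling over the spatial dimensions.
        stats = x.mean(dim=(2, 3), keepdim=True)
        beta = torch.sigmoid(self.fc2(self.fc1(stats)))
        dpx = (self.p1 - self.p2) * x
        return dpx * torch.sigmoid(beta * dpx) + self.p2 * x

# Example usage: act = MetaAconC(64); y = act(torch.randn(2, 64, 32, 32))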