Image stitching is a crucial task in image processing. However, factors such as perspective and environment often produce irregular boundaries in stitched images, and cropping or completion methods typically cause substantial loss of information. This paper proposes a method for rectifying irregular stitched images into rectangles using deformable meshes and residual networks. A convolutional neural network first quantifies the rigid structures in the image, and the most suitable mesh structure is then chosen from triangular, rectangular, and hexagonal options according to the extraction results. Subsequently, the irregular image, the predefined mesh structure, and the predicted mesh structure are fed into a wide residual neural network for regression. The loss function comprises a local term and a global term, aimed at minimizing the loss of image information within each mesh cell and over the global target. This method not only significantly reduces information loss during rectification but also adapts to images with various rigid structures. Validation on the DIR-D dataset shows that the method outperforms state-of-the-art methods in image rectification.
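The abstract only states that the loss combines a local and a global term; one plausible formulation, assuming a simple weighted sum (the weights $\lambda$ and the symbols below are illustrative, not taken from the paper), is
\[
\mathcal{L} \;=\; \lambda_{\mathrm{local}} \sum_{k} \mathcal{L}_{\mathrm{local}}\!\left(m_k, \hat{m}_k\right) \;+\; \lambda_{\mathrm{global}}\, \mathcal{L}_{\mathrm{global}}\!\left(I, \hat{I}\right),
\]
where $m_k$ and $\hat{m}_k$ would denote the predefined and predicted mesh cells, and $I$ and $\hat{I}$ the target rectangular image and the rectified output, respectively.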