The most critical aspect of panorama generation is maintaining local semantic consistency. Objects in the captured images may be projected from different depths, so when the images are warped onto a unified canvas, pixels at the semantic boundaries between views become significantly misaligned. We propose two lightweight strategies to address this challenge efficiently. First, the original image is segmented into superpixels rather than regular grids to preserve the structure of each cell, and we design effective cost functions to generate a warp matrix for each superpixel. The warp matrices vary progressively to ensure smooth projection, which yields a more faithful reconstruction of object structures. Second, to suppress artifacts introduced by stitching, we use a seam-line method tailored to superpixels. The algorithm accounts for the feature similarity of neighboring superpixels, including color difference, structure, and entropy, and incorporates semantic information to avoid semantic misalignment. The optimal seam, constrained by these cost functions, is obtained under a graph model, and the resulting stitched images exhibit improved naturalness. We evaluate the algorithm extensively on common panorama-stitching datasets. Experimental results show that it effectively mitigates artifacts, preserves the completeness of semantics, and produces panoramic images whose subjective quality is superior to that of alternative methods.
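To make the seam-cost idea concrete, the sketch below combines the three named similarity terms between two neighboring superpixels: color difference, a structure term, and an entropy difference. This is a hedged illustration, not the paper's exact formulation: the weights `w_color`, `w_struct`, `w_ent` are hypothetical, each superpixel is represented simply as a flat array of its RGB pixels, and intensity standard deviation stands in as a crude proxy for structure.

```python
import numpy as np

def region_entropy(vals, bins=16):
    """Shannon entropy (base 2) of intensity values in [0, 1]."""
    hist, _ = np.histogram(vals, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def seam_cost(a, b, w_color=1.0, w_struct=1.0, w_ent=0.5):
    """Illustrative cost between two neighboring superpixels.

    a, b: (N, 3) arrays of RGB values in [0, 1], one row per pixel.
    The weights are assumptions chosen for illustration only.
    """
    # Color term: distance between mean colors of the two regions.
    color = float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))
    # Structure term: intensity std as a crude texture proxy (assumption).
    ga, gb = a.mean(axis=1), b.mean(axis=1)  # per-pixel grayscale
    struct = abs(float(ga.std()) - float(gb.std()))
    # Entropy term: difference in intensity-histogram entropy.
    ent = abs(region_entropy(ga) - region_entropy(gb))
    return w_color * color + w_struct * struct + w_ent * ent

# Two uniform toy regions: identical regions cost 0, distinct ones cost more.
dark = np.full((100, 3), 0.2)
bright = np.full((100, 3), 0.8)
print(seam_cost(dark, dark))    # identical regions -> zero cost
print(seam_cost(dark, bright))  # differing regions -> positive cost
```

In a full pipeline, such pairwise costs would become edge weights in the graph model over adjacent superpixels, and the seam would be the minimum-cost cut separating the two views.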