Objective. Rigid registration of pre-operative 3D CT and intra-operative 2D X-ray images is an essential technology in various image-guided spine surgeries. 3D/2D registration comprises two essential tasks: establishing dimensional correspondence and estimating the 3D pose. Most existing methods project the 3D data to 2D to establish dimensional correspondence, which makes the pose parameters difficult to estimate because spatial information is lost. This work aims to develop a reconstruction-based 3D/2D registration method for spine surgery navigation.

Approach. We propose a novel segmentation-guided 3D/2D registration (SGReg) method for orthogonal X-ray and CT images based on reconstruction. SGReg consists of a bi-path segmentation network and an inter-path multi-scale pose estimation module. In the bi-path segmentation network, the X-ray segmentation path reconstructs 3D spatial information from the 2D orthogonal X-ray images into segmentation maps, while the CT segmentation path predicts segmentation maps from the 3D CT images, thereby bringing the 3D and 2D data into dimensional correspondence. In the inter-path multi-scale pose estimation module, features from the two segmentation paths are integrated and the pose parameters are regressed directly under the guidance of coordinate information.

Main results. We evaluated SGReg on the public CTSpine1k dataset and compared its registration performance with that of other learning-based methods. SGReg achieved considerable improvement over these methods with strong robustness.

Significance. We have proposed an end-to-end 3D/2D registration framework named SGReg. Based on the idea of reconstruction, SGReg unifies dimensional correspondence establishment and direct pose estimation in 3D space within a single framework, showing significant potential in spine surgery navigation.
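The bi-path design described above can be sketched in PyTorch. This is only a minimal toy illustration of the stated idea (two segmentation paths whose features are fused for direct 6-DoF pose regression); all layer sizes, the toy 16-voxel resolution, and the class names (`XraySegPath`, `CTSegPath`, `SGRegSketch`) are hypothetical assumptions, not the authors' architecture, and the multi-scale fusion and coordinate guidance are omitted.

```python
# Hedged sketch of a bi-path segmentation network with a pose-regression
# head, loosely following the abstract's description. All sizes are toy
# assumptions; the real SGReg architecture is not specified here.
import torch
import torch.nn as nn

class XraySegPath(nn.Module):
    """Lifts two orthogonal 2D X-ray views into a coarse 3D segmentation map."""
    def __init__(self, ch=8, depth=16):
        super().__init__()
        self.depth = depth
        self.enc2d = nn.Conv2d(2, ch, 3, padding=1)   # 2 orthogonal views as channels
        self.dec3d = nn.Conv3d(ch, 1, 3, padding=1)
    def forward(self, xrays):                          # (B, 2, H, W)
        f = torch.relu(self.enc2d(xrays))              # (B, ch, H, W)
        # Broadcast 2D features along a depth axis as a crude "reconstruction".
        vol = f.unsqueeze(2).expand(-1, -1, self.depth, -1, -1)
        return torch.sigmoid(self.dec3d(vol))          # (B, 1, D, H, W)

class CTSegPath(nn.Module):
    """Predicts a 3D segmentation map directly from the CT volume."""
    def __init__(self, ch=8):
        super().__init__()
        self.enc3d = nn.Conv3d(1, ch, 3, padding=1)
        self.dec3d = nn.Conv3d(ch, 1, 3, padding=1)
    def forward(self, ct):                             # (B, 1, D, H, W)
        return torch.sigmoid(self.dec3d(torch.relu(self.enc3d(ct))))

class SGRegSketch(nn.Module):
    """Fuses the two segmentation paths and regresses a 6-DoF pose."""
    def __init__(self, depth=16):
        super().__init__()
        self.xray_path = XraySegPath(depth=depth)
        self.ct_path = CTSegPath()
        self.pose_head = nn.Sequential(
            nn.Conv3d(2, 4, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(4, 6))                           # 3 rotations + 3 translations
    def forward(self, xrays, ct):
        seg_x = self.xray_path(xrays)                  # correspondence in 3D space
        seg_ct = self.ct_path(ct)
        fused = torch.cat([seg_x, seg_ct], dim=1)      # inter-path feature integration
        return self.pose_head(fused), seg_x, seg_ct

model = SGRegSketch(depth=16)
pose, seg_x, seg_ct = model(torch.randn(1, 2, 16, 16),
                            torch.randn(1, 1, 16, 16, 16))
print(pose.shape)  # torch.Size([1, 6])
```

The key design point the sketch mirrors is that both inputs are mapped into the same 3D segmentation space before pose regression, so the pose head never has to bridge a 2D/3D dimensional gap.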