Background: Semisupervised learning has been widely adopted in segmentation applications to alleviate the difficulty of collecting abundant annotated segmentation masks, which are an essential prerequisite for training high-performance 3D convolutional neural networks (CNNs).

Purpose: Existing semisupervised segmentation methods are mainly concerned with generating pseudo labels under regularization, without explicitly evaluating the quality of those pseudo labels. To alleviate this problem, we propose a simple yet effective reciprocal learning strategy for semisupervised volumetric medical image segmentation, which generates more reliable pseudo labels for the unannotated data.

Methods: The proposed reciprocal learning is achieved through a pair of networks, one serving as a teacher network and the other as a student network. The student network learns from pseudo labels generated by the teacher network, while the teacher network autonomously optimizes its parameters based on reciprocal feedback signals derived from the student's performance on the annotated images. The efficacy of the proposed method is evaluated on three medical image data sets: 82 pancreas computed tomography (CT) scans (training/testing: 62/20), 100 left atrium gadolinium-enhanced magnetic resonance (MR) scans (training/testing: 80/20), and 200 breast cancer MR scans (training/testing: 68/132). The comparison methods include the mean teacher (MT) model, the uncertainty-aware MT (UA-MT) model, the shape-aware adversarial network (SASSNet), and the transformation-consistent self-ensembling model (TCSM). The evaluation metrics are the Dice similarity coefficient (Dice), Jaccard index (Jaccard), 95% Hausdorff distance (95HD), and average surface distance (ASD). The Wilcoxon signed-rank test is used for the statistical analyses.
Results: With 20% labeled data and 80% unlabeled data for training, the proposed method achieves an average Dice of 84.77%/90.46%/78.53%, Jaccard of 73.71%/82.67%/69.00%, ASD of 1.58/1.90/0.57, and 95HD of 6.24/5.97/4.34 on the pancreas/left atrium/breast data sets, respectively. These results outperform several cutting-edge semisupervised approaches, showing the effectiveness of the proposed reciprocal learning strategy.
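The reciprocal teacher–student loop described in Methods can be sketched on a toy problem. The following is a minimal, illustrative NumPy sketch, not the paper's 3D CNN implementation: the "networks" are logistic models on a scalar feature, all names are hypothetical, and the teacher's feedback-driven update is approximated with finite differences rather than the paper's actual optimization scheme. It only shows the control flow: the student learns from the teacher's pseudo labels on unlabeled data, and the teacher is then updated to reduce the student's error on the labeled set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for the segmentation task: predict a binary label
# from one scalar feature. All names here are illustrative only.
X_lab = rng.normal(size=(20, 1))
y_lab = (X_lab[:, 0] > 0).astype(float)   # "annotated" set
X_unl = rng.normal(size=(80, 1))          # "unannotated" set

def predict(w, X):
    # A logistic "network" with one weight and one bias.
    return 1.0 / (1.0 + np.exp(-(X @ w[:1] + w[1])))

def fit_student(w_teacher, X, steps=100, lr=0.5):
    # Student learns from the teacher's (soft) pseudo labels on unlabeled data.
    pseudo = predict(w_teacher, X)
    w = np.zeros(2)
    for _ in range(steps):
        p = predict(w, X)
        grad = np.array([((p - pseudo) * X[:, 0]).mean(), (p - pseudo).mean()])
        w -= lr * grad
    return w

def labeled_loss(w):
    # Feedback signal: the student's error on the annotated images.
    p = predict(w, X_lab)
    return float(((p - y_lab) ** 2).mean())

# Reciprocal loop: the teacher's parameters are nudged (here via crude
# finite differences) to reduce the student's labeled-set error.
w_teacher = rng.normal(size=2)
loss_before = labeled_loss(fit_student(w_teacher, X_unl))
eps, meta_lr = 1e-2, 1.0
for _ in range(20):
    base = labeled_loss(fit_student(w_teacher, X_unl))
    grad = np.zeros(2)
    for i in range(2):
        w_pert = w_teacher.copy()
        w_pert[i] += eps
        grad[i] = (labeled_loss(fit_student(w_pert, X_unl)) - base) / eps
    w_teacher -= meta_lr * grad
loss_after = labeled_loss(fit_student(w_teacher, X_unl))
```

The key design point this sketch captures is that the teacher is not updated by an exponential moving average of the student (as in the MT baseline) but by an explicit feedback signal from the student's performance on annotated data, which is what allows the pseudo-label quality to be assessed rather than merely regularized.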