Point clouds produced by structured-light 3D reconstruction are highly susceptible to camera lens distortion, which prevents them from meeting the requirements of high-precision measurement. We propose a point cloud distortion correction method based on a Back Propagation neural network optimized by the Hippopotamus Optimization algorithm (HO-BP). The method first applies OpenCV distortion correction to the actual point cloud and performs a least squares (LS) plane fit to obtain Z coordinates lying on a common plane. The pinhole camera model is then used to recover the X and Y coordinates on the fitted plane, yielding the target plane point cloud. The HO-BP model learns a mapping from the actual point cloud to this target point cloud, thereby achieving distortion correction. The method was evaluated on standard sphere point clouds captured at multiple positions. Compared with the original point clouds, the mean absolute error (MAE) and root mean square error (RMSE) of the LS-fitted sphere radius decreased by 72.42% and 62.62%, respectively, after HO-BP correction, showing that the corrected point clouds are closer to the ideal point clouds. HO-BP also outperformed existing algorithms in terms of MAE and RMSE.
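The target-plane construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a plane model z = ax + by + c, known pinhole intrinsics (fx, fy, cx, cy), and per-point pixel coordinates; the function names are hypothetical.

```python
import numpy as np

def fit_plane_lstsq(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) point cloud."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def target_plane_cloud(pixels, plane, fx, fy, cx, cy):
    """Intersect each pixel's viewing ray with the fitted plane.

    pixels: (N, 2) array of (u, v) image coordinates
    plane:  (a, b, c) coefficients from fit_plane_lstsq
    fx, fy, cx, cy: pinhole intrinsics (assumed known from calibration)
    """
    a, b, c = plane
    xn = (pixels[:, 0] - cx) / fx    # normalized ray direction X/Z
    yn = (pixels[:, 1] - cy) / fy    # normalized ray direction Y/Z
    Z = c / (1.0 - a * xn - b * yn)  # depth where the ray meets the plane
    return np.column_stack([xn * Z, yn * Z, Z])  # target (X, Y, Z) points
```

The resulting (actual point, target point) pairs would then serve as training data for the HO-BP mapping.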