Existing methods for correcting shooting direction in basketball training suffer from low correction accuracy and poor adaptability, so a shooting direction correction method based on visual perception is proposed. A visual localisation algorithm that tracks the feature points of object targets serves as the basis of the robot visual localisation process, which runs from camera calibration, template matching, and background modelling with foreground target separation through feature point extraction, motion estimation, and Kalman filtering. The traditional corner detection algorithm is analysed in depth, and improvements are proposed on that basis. An accurate tracking method based on improved Harris corner extraction is introduced: it builds on traditional Harris feature point detection by exploiting how the grey-value gradients of pixels near a candidate corner vary, using simple operations to reject pseudo-corner and non-corner points before further processing the retained points to obtain the correct feature points. The algorithm is implemented and its detection results are compared with those of the traditional algorithm; the improved algorithm extracts more accurate corner points in a shorter time, laying the foundation for the subsequent step of accurately tracking the basketball and demonstrating the practicality of the algorithm.
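
The abstract describes the gradient-based pre-filter only at a high level. As a minimal sketch of the idea rather than the paper's actual implementation (assuming Python with OpenCV and NumPy; the function names, thresholds, and synthetic test image below are illustrative), the example first discards flat regions and straight-edge pixels with a cheap test on the horizontal and vertical grey-value gradients, and only then evaluates the standard Harris response and a local-maximum check on the retained candidate points.

    import cv2
    import numpy as np


    def prefilter_candidates(gray, low_frac=0.1):
        # Cheap grey-value gradient test: keep only pixels whose horizontal AND
        # vertical gradients are both non-trivial.  Flat regions (non-corner
        # points) and straight edges (pseudo-corners with a strong gradient in
        # only one direction) are rejected before the Harris response is computed.
        ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mask = (np.abs(ix) > low_frac * np.abs(ix).max()) & \
               (np.abs(iy) > low_frac * np.abs(iy).max())
        return mask, ix, iy


    def harris_on_candidates(gray, k=0.04, sigma=1.0, resp_frac=0.01):
        # Standard Harris response, evaluated only on the retained candidates,
        # followed by a simple local-maximum (non-maximum suppression) check.
        mask, ix, iy = prefilter_candidates(gray)
        ixx = cv2.GaussianBlur(ix * ix, (0, 0), sigma)   # structure-tensor terms
        iyy = cv2.GaussianBlur(iy * iy, (0, 0), sigma)
        ixy = cv2.GaussianBlur(ix * iy, (0, 0), sigma)
        response = (ixx * iyy - ixy * ixy) - k * (ixx + iyy) ** 2
        response[~mask] = 0.0                            # pre-filtered pixels drop out
        local_max = cv2.dilate(response, np.ones((3, 3), np.uint8))
        corners = (response == local_max) & (response > resp_frac * response.max())
        return np.argwhere(corners)                      # (row, col) corner coordinates


    if __name__ == "__main__":
        # Synthetic test frame: a bright rectangle has four true corners plus long
        # straight edges that the pre-filter is expected to discard.
        frame = np.zeros((120, 160), np.float32)
        frame[30:90, 40:120] = 255.0
        print(len(harris_on_candidates(frame)), "corner points retained")

Because the more expensive Harris response is evaluated only where the cheap gradient test passes, the run-time saving grows with the proportion of flat background in each frame, which is consistent with the abstract's claim that the improved algorithm extracts corner points in a shorter time.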