To address the low picking-classification efficiency and slow response of machine-vision-based positioning methods for parallel robots, this paper proposes a hybrid deep learning method, RP-YOLOX-DL (a YOLOX-Deeplabv3+ method for robot positioning), that enables parallel robots to pick accurately. First, the lightweight RP-YOLOX network performs target recognition, classification, and coarse positioning. A new feature-enhancement network, DS-PANet, is proposed to optimize the original up- and down-sampling structure, improving computational efficiency through an attention mechanism and depthwise convolutions. The loss function used in network evaluation is also enhanced: an Emphasizing-the-Target Binary Cross-Entropy (ETBCE) loss is proposed for the objectness loss. Second, the Deeplabv3+ (DL) network is adopted, and its pooling structure is improved with atrous convolutions of different rates to capture rich multi-scale information. The center coordinates extracted from the semantic segmentation are then used for fine positioning, and a hybrid positioning strategy combines the RP-YOLOX and DL modules to obtain the best target coordinates. Finally, a hand-eye calibration is performed to convert between the robot, camera, and conveyor-belt coordinate frames in an eye-to-hand configuration. Experimental results show that the hybrid method achieves a pick-up rate of 92.56% and a response time of 2.357 s, outperforming the Faster R-CNN, YOLOv3, and YOLOv5 algorithms; its identification efficiency is 2.41% higher than that of the YOLOX algorithm. These results verify the efficiency and robust adaptability of the hybrid method. This study provides a useful reference for applying deep learning methods to robot positioning and picking.
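The final calibration step described above maps image coordinates into the robot's coordinate frame. As a minimal sketch of that idea (assuming a planar workspace where the camera-to-robot mapping is affine; the function names and calibration values below are illustrative and not taken from the paper), the transform can be estimated from a few calibration point pairs by least squares:

```python
import numpy as np

def fit_affine(pixel_pts, robot_pts):
    """Least-squares 2D affine transform mapping pixel -> robot coordinates.

    pixel_pts, robot_pts: (N, 2) arrays of corresponding calibration points
    (N >= 3, not collinear). Returns a 2x3 matrix A with robot = A @ [u, v, 1].
    """
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    ones = np.ones((pixel_pts.shape[0], 1))
    P = np.hstack([pixel_pts, ones])        # homogeneous pixel coordinates, (N, 3)
    A, *_ = np.linalg.lstsq(P, robot_pts, rcond=None)
    return A.T                              # (2, 3)

def pixel_to_robot(A, uv):
    """Map one pixel coordinate (u, v) into the robot frame."""
    u, v = uv
    return A @ np.array([u, v, 1.0])

# Synthetic calibration data: 0.5 mm/px scale, (100, 200) mm offset (illustrative)
pix = np.array([[0, 0], [100, 0], [0, 100], [100, 100]])
rob = pix * 0.5 + np.array([100.0, 200.0])
A = fit_affine(pix, rob)
target = pixel_to_robot(A, (50, 50))        # should recover ~ (125, 225) mm
```

A full eye-to-hand calibration with a non-planar setup would instead solve for a rigid 3D transform (e.g., via a hand-eye calibration routine), but the affine model is a common simplification when parts lie on a flat conveyor belt.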