Robotic manipulation of a nontrivial object that provides various types of grasping points is of industrial interest. Here, an efficient method for the simultaneous detection of such grasping points is proposed. Specifically, two different 3-degree-of-freedom end effectors are considered for simultaneous grasping. The method utilizes an RGB data-driven perception system based on a purpose-designed fully convolutional neural network called attention squeeze parallel U-Net (ASP U-Net). ASP U-Net detects grasping points from a single RGB image, which is transformed into a schematic grayscale frame in which the positions and poses of the grasping points are encoded as gradient geometric shapes. To validate the ASP U-Net architecture, its performance was compared with that of nine competing architectures using metrics based on the generalized intersection over union and the mean absolute error. The results indicate its outstanding accuracy and response time. ASP U-Net is also computationally efficient: with a modest memory footprint (77 MB), the architecture can be implemented on common single-board computers. Here, its capabilities were tested and evaluated on the NVIDIA Jetson Nano platform.

INDEX TERMS Robotic grasping, grasping point detection, machine vision, deep learning, convolutional neural network.