In weed control, precision agriculture can help to greatly reduce the use of herbicides, resulting in both economic and ecological benefits. A key element here is the ability to locate and segment all the plants (crop and weed) in image data. Modern instance segmentation techniques can achieve this; however, training such systems requires large amounts of hand-labelled data, which is expensive and laborious to obtain. Weakly supervised training can help to greatly reduce labelling efforts and costs. In this paper we propose panoptic one-click segmentation, an efficient and accurate offline tool to produce pseudo-labels from click inputs and thereby reduce the labelling effort when creating novel datasets. Our approach jointly estimates the pixel-wise location of all N objects in the scene, in contrast to traditional approaches, which iterate independently over all N objects. This results in a highly efficient technique with greatly reduced training times. Using just 10% of the data to train our panoptic one-click segmentation approach yields 68.1% and 68.8% mean object intersection over union (IoU) on challenging sugar beet and corn image data, respectively, providing performance comparable to traditional one-click approaches while being approximately 12 times faster to train. We demonstrate the practical applicability of our system by generating pseudo-labels from click annotations for the remaining 90% of the data. These pseudo-labels are then used to train Mask R-CNN in a semi-supervised manner, improving the mean foreground IoU by 9.4 and 7.9 absolute points for the sugar beet and corn data, respectively, demonstrating the potential of our approach to rapidly annotate challenging data. Finally, we show that our panoptic one-click segmentation technique is able to recover clicks missed during annotation, a further benefit over traditional approaches.
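The abstract's central contrast, a single joint forward pass over all N clicked objects versus one forward pass per object, can be sketched in a few lines. The following is a minimal, hypothetical illustration and not the paper's actual architecture: the toy network, the Gaussian click encoding, and all names (click_heatmap, PanopticOneClickNet, max_instances) are assumptions introduced only to make the efficiency argument concrete.

```python
import torch
import torch.nn as nn


def click_heatmap(h, w, clicks, sigma=10.0):
    """Render a set of annotator clicks (row, col) as one joint Gaussian heatmap."""
    ys = torch.arange(h).view(h, 1).float()
    xs = torch.arange(w).view(1, w).float()
    heat = torch.zeros(h, w)
    for cy, cx in clicks:
        heat = torch.maximum(
            heat, torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        )
    return heat


class PanopticOneClickNet(nn.Module):
    """Toy encoder: RGB image plus one joint click channel in, one logit map
    per instance slot out, so all N objects are predicted in a single pass."""

    def __init__(self, max_instances=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, max_instances, 1),
        )

    def forward(self, image, clicks_heat):
        return self.net(torch.cat([image, clicks_heat], dim=1))


H, W = 128, 128
image = torch.rand(1, 3, H, W)
clicks = [(30, 40), (80, 90), (60, 20)]  # one click per plant (crop or weed)

# Traditional one-click segmentation would loop here: one heatmap and one
# forward pass per click, i.e. N network evaluations per image.
# The joint formulation encodes all clicks at once and runs a single pass:
heat = click_heatmap(H, W, clicks).view(1, 1, H, W)
logits = PanopticOneClickNet()(image, heat)  # (1, max_instances, H, W)
```

Under these assumptions the per-image cost drops from N forward passes to one, which is the source of the roughly 12x training speed-up claimed above; the fixed number of instance slots is a simplification of whatever assignment mechanism the actual method uses.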