Object positioning is a fundamental requirement for visual robotics on automatic assembly lines. An assembly line demands fast adaptation to new positioning tasks with little or no training data for deep learning algorithms, and the captured images usually suffer from missing regions, cropping, and environmental lighting interference. These constraints call for positioning algorithms for arbitrary shapes that remain efficient and robust under data scarcity and shape distortion. To this end, this paper proposes the Random Verify Generalised Hough Transform (RV-GHT). The RV-GHT builds a far more concise shape dictionary than traditional GHT methods from just a single training image, and it estimates the location, orientation, and scale of multiple target objects simultaneously. Experiments were carried out on a dataset from an automatic assembly line with real shape distortions, and the performance improved greatly over state-of-the-art methods. Although the RV-GHT was initially designed for vision robotics on an automatic assembly line, it applies to other object positioning mechatronic systems whose sensing can be modelled as shape distortion of a standard reference object.
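For context, the classical GHT that RV-GHT builds on stores, for each quantised edge orientation, the displacement vectors from template edge points to a reference point (the R-table), and localises the shape by letting each scene edge point cast votes into an accumulator. The sketch below illustrates only this baseline voting scheme, not the RV-GHT itself; the point sets, orientation values, and bin count are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def build_r_table(edge_points, orientations, ref_point, n_bins=8):
    """R-table: orientation bin -> displacement vectors to the reference point."""
    table = defaultdict(list)
    for (x, y), theta in zip(edge_points, orientations):
        b = int(theta / (2 * np.pi) * n_bins) % n_bins
        table[b].append((ref_point[0] - x, ref_point[1] - y))
    return table

def ght_vote(edge_points, orientations, table, shape, n_bins=8):
    """Each scene edge point votes for candidate reference-point locations."""
    acc = np.zeros(shape, dtype=int)
    for (x, y), theta in zip(edge_points, orientations):
        b = int(theta / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in table.get(b, []):
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    return acc

# Toy template: two parallel edge segments with opposite gradient directions.
template_pts = [(0, 0), (1, 0), (2, 0), (0, 2), (1, 2), (2, 2)]
template_ori = [0.0] * 3 + [np.pi] * 3
table = build_r_table(template_pts, template_ori, ref_point=(1, 1))

# Scene: the same shape translated by (5, 7); the vote peak recovers it.
scene_pts = [(x + 5, y + 7) for x, y in template_pts]
acc = ght_vote(scene_pts, template_ori, table, shape=(20, 20))
peak = np.unravel_index(acc.argmax(), acc.shape)  # -> (6, 8), i.e. ref + shift
```

Handling the orientation and scale estimation mentioned in the abstract would add two more accumulator dimensions (rotating and scaling the stored displacement vectors), which is where the cost of the traditional GHT grows and where a more concise dictionary pays off.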