This article addresses the recognition of handheld objects from incomplete tactile observations, using a classifier trained only on visual representations. Our method builds on the deep learning (DL) architecture PointNet and a curriculum learning (CL) technique that fosters the learning of descriptors robust to partial representations of objects. The learning procedure gradually decomposes the visual point clouds to synthesize increasingly sparse input data for the model. In this manner, we were able to employ one-shot learning, using the decomposed visual point clouds as augmentations, and to reduce the data-collection requirements for training. The approach allows prediction accuracy to improve gradually as more tactile data become available. We evaluated the effectiveness of the curriculum strategy on our generated visual and tactile datasets, showing experimentally that the proposed method improved recognition accuracy by up to 23% on partial tactile data and boosted accuracy on full tactile data from 93% to 100%. The curriculum-trained network recognized objects with 80% accuracy using only 20% of the tactile data representing the objects, rising to 100% accuracy on clouds containing at least 60% of the points.
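The core of the curriculum is the gradual decomposition of a full visual point cloud into sparser training samples that mimic partial tactile observations. The sketch below illustrates one plausible form of that decomposition; the stage fractions, random subsampling, and function names (`curriculum_subsample`, `curriculum_stages`) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def curriculum_subsample(cloud, keep_fraction, rng):
    """Randomly retain a fraction of the points in a point cloud.

    cloud: (N, 3) array of XYZ points from the full visual scan.
    keep_fraction: fraction of points kept at this curriculum stage
                   (illustrative; the paper's schedule may differ).
    """
    n_keep = max(1, int(round(keep_fraction * len(cloud))))
    idx = rng.choice(len(cloud), size=n_keep, replace=False)
    return cloud[idx]

def curriculum_stages(cloud, fractions=(1.0, 0.8, 0.6, 0.4, 0.2), seed=0):
    """Yield sparser and sparser views of one visual cloud,
    emulating partial tactile observations for training a
    PointNet-style classifier."""
    rng = np.random.default_rng(seed)
    for f in fractions:
        yield f, curriculum_subsample(cloud, f, rng)

# Example: decompose one synthetic object cloud into curriculum stages.
if __name__ == "__main__":
    full_cloud = np.random.default_rng(42).normal(size=(2048, 3))
    for fraction, partial in curriculum_stages(full_cloud):
        print(f"stage keep={fraction:.0%}: {partial.shape[0]} points")
```

Each stage's subsampled clouds act as augmentations of the single visual scan per object, which is what makes the one-shot training regime described above feasible.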