Abstract - To augment traditionally vision-based body schema learning with a sensory channel that provides more accurate positional information, we propose a tactile-servoing feedback controller that allows a robot to continuously acquire self-touch information while sliding a fingertip across its own body. In this manner, a large amount of training data representing the body shape can be acquired quickly. We compare three approaches to track the common contact point observed when one robot arm touches the other in a bimanual setup: feed-forward control, relying solely on CAD-based kinematics, performs worst; a controller based on tactile feedback alone typically lags behind; only the combination of both approaches yields satisfactory results. As a first, preliminary application, we use the self-touch capability to calibrate the closed kinematic chain formed by both arms touching each other. The obtained homogeneous transform, describing the relative mounting pose of both arms, improves end-effector position estimates by an order of magnitude compared to a traditional, vision-based approach.
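To illustrate the kind of controller the abstract describes, the following is a minimal sketch of combining a feed-forward joint velocity (derived from the nominal CAD kinematics) with a proportional tactile-feedback correction that keeps the sensed contact point centered on the fingertip's tactile array. All names, gains, and the tactile task Jacobian are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def tactile_servo_step(q, qdot_ff, contact_err, J_tactile, Kp=1.0, dt=0.01):
    """One control step: feed-forward joint velocity from the nominal (CAD)
    kinematics plus a proportional correction driving the sensed contact
    point toward its desired location on the tactile array.
    (Hypothetical sketch; gains and error definitions are assumptions.)"""
    # Map the 2-D contact-point error on the tactile array into joint
    # space via the pseudo-inverse of an assumed tactile task Jacobian.
    qdot_fb = Kp * np.linalg.pinv(J_tactile) @ contact_err
    qdot = qdot_ff + qdot_fb      # combine both channels
    return q + dt * qdot          # integrate to the next joint configuration

# Toy usage: 7-DoF arm, 2-D contact-point error (metres) on the fingertip.
q = np.zeros(7)
qdot_ff = np.full(7, 0.05)       # sliding motion from CAD-based kinematics
J_tactile = np.random.default_rng(0).standard_normal((2, 7))
contact_err = np.array([0.003, -0.001])
q_next = tactile_servo_step(q, qdot_ff, contact_err, J_tactile)
```

As the abstract notes, either channel alone is insufficient in this sketch as well: dropping the feedback term leaves kinematic model errors uncorrected, while dropping the feed-forward term makes the controller purely reactive.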