This article presents a method for online learning of robot navigation affordances from spatiotemporally correlated haptic and depth cues. The method allows the robot to incrementally learn which of the objects present in the environment are actually traversable. This is a critical capability for any wheeled robot operating in natural environments, where the inability to distinguish vegetation from non-traversable obstacles frequently hampers terrain progression. A wheeled robot prototype was developed to validate the proposed method experimentally. The prototype obtains haptic and depth sensory feedback from a pan-tilt telescopic antenna and from a structured light sensor, respectively. With the presented method, the robot learns a mapping between object descriptors, computed from the range data provided by the sensor, and object stiffness, as estimated from the interaction between the antenna and the object. Learning confidence estimation is used to progressively reduce the number of physical interactions required with already-acquainted objects. To raise the number of meaningful interactions per object under time pressure, the segments of the object under analysis are prioritised according to a set of morphological criteria. Field trials demonstrate the robot's ability to progressively learn which elements of the environment are traversable.
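
To make the confidence-gated learning loop described above concrete, the following is a minimal sketch, not the authors' implementation: it assumes a nearest-neighbour memory over depth-based descriptors, a distance-based confidence test, and a hypothetical `probe_fn` standing in for the antenna interaction; the `confidence_radius` and `traversable_stiffness` parameters are illustrative assumptions.

```python
# Hypothetical sketch of confidence-gated affordance learning.
# All names and thresholds are illustrative, not from the paper.
import numpy as np

class OnlineAffordanceLearner:
    """Maps depth-based object descriptors to stiffness estimates,
    probing with the antenna only when prediction confidence is low."""

    def __init__(self, confidence_radius=0.5, traversable_stiffness=0.3):
        self.descriptors = []   # stored descriptor vectors (from range data)
        self.stiffness = []     # stiffness labels from past haptic probes
        self.radius = confidence_radius          # assumed distance threshold
        self.threshold = traversable_stiffness   # assumed traversability cutoff

    def predict(self, descriptor):
        """Return (stiffness_estimate, confident) for a new descriptor."""
        if not self.descriptors:
            return None, False
        dists = np.linalg.norm(np.array(self.descriptors) - descriptor, axis=1)
        nearest = int(np.argmin(dists))
        return self.stiffness[nearest], bool(dists[nearest] < self.radius)

    def update(self, descriptor, measured_stiffness):
        """Store a (descriptor, stiffness) pair after a physical interaction."""
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.stiffness.append(float(measured_stiffness))


def decide_traversable(learner, descriptor, probe_fn):
    """Reuse past experience when confident; otherwise trigger a
    haptic probe (antenna interaction) and learn from its outcome."""
    estimate, confident = learner.predict(descriptor)
    if not confident:
        estimate = probe_fn()            # stand-in for the antenna probe
        learner.update(descriptor, estimate)
    return estimate < learner.threshold  # low stiffness -> traversable


if __name__ == "__main__":
    learner = OnlineAffordanceLearner()
    grass = np.array([0.1, 0.9])   # toy descriptor for vegetation
    rock = np.array([0.9, 0.2])    # toy descriptor for a rigid obstacle
    # First encounters force a probe; similar repeats resolve from memory.
    print(decide_traversable(learner, grass, lambda: 0.10))  # True (probed)
    print(decide_traversable(learner, rock, lambda: 0.95))   # False (probed)
    print(decide_traversable(learner, grass + 0.01, lambda: 0.10))  # True, no probe
```

In this toy loop, the third call is answered from memory alone, mirroring the abstract's claim that confidence estimation progressively reduces the number of physical interactions with already-acquainted objects.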