Autonomous collision avoidance for visually impaired people requires accurate identification of the traversable area, so real-time processing of an image sequence for traversable-area segmentation is mandatory. Low-cost systems typically rely on poor-quality cameras, whose images exhibit great variability in traversable-area appearance, both indoors and outdoors. Given the ambiguity in object and traversable-area appearance induced by reflections, illumination variations, occlusions, etc., accurate segmentation of the traversable area under such conditions remains a challenge; indoor and outdoor environments each introduce further variability. In this paper, we present a fast traversable-area segmentation approach for navigation systems that operates on image sequences recorded by a low-cost monocular camera. To account for all sources of image variability, we apply possibility theory to model information ambiguity. The traversable-area model is updated efficiently under each environmental condition by taking traversable-area samples from the current image to build possibility maps; fusing these maps then yields a faithful model of the traversable area. The performance of the proposed system was evaluated on public indoor and outdoor databases. Experimental results show that the method is competitive, achieving high segmentation rates.
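The abstract describes building per-pixel possibility maps from traversable-area samples of the current image and fusing them into one model. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: it assumes grayscale intensities in [0, 1], derives a possibility distribution from each sample's normalized histogram, and fuses the resulting maps disjunctively (max); all names and parameters (bin count, alpha-cut threshold, sample locations) are hypothetical.

```python
import numpy as np

def possibility_distribution(sample, bins=32):
    """Build a possibility distribution over gray levels from one
    traversable-area sample: rescale the histogram so the most
    frequent level gets possibility 1."""
    hist, _ = np.histogram(sample, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.max(), 1)

def possibility_map(image, pi, bins=32):
    """Per-pixel possibility of belonging to the traversable area."""
    idx = np.clip((image * bins).astype(int), 0, bins - 1)
    return pi[idx]

def fuse_maps(maps):
    """Disjunctive (max) fusion: a pixel is possible if any
    sample-based model deems it possible."""
    return np.maximum.reduce(maps)

# Synthetic example: dark "road" (bottom half), bright background (top).
rng = np.random.default_rng(0)
img = np.empty((64, 64))
img[:32] = rng.normal(0.8, 0.02, (32, 64))   # background
img[32:] = rng.normal(0.3, 0.02, (32, 64))   # traversable area
img = np.clip(img, 0.0, 1.0)

# Two samples taken from the assumed-traversable lower part of the image.
samples = [img[50:60, 5:25], img[50:60, 40:60]]
maps = [possibility_map(img, possibility_distribution(s)) for s in samples]
fused = fuse_maps(maps)
segmentation = fused > 0.1   # alpha-cut of the fused possibility map
```

On this toy image, the alpha-cut of the fused map selects the dark lower region as traversable. A conjunctive (min) fusion would instead keep only pixels compatible with every sample, a stricter criterion.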