Affordance features are increasingly used in a range of robotic applications. An open affordance framework called AfNet defines over 250 objects in terms of 35 affordance features that are grounded in visual perception algorithms. While AfNet is intended for use with cognitive visual recognition systems, an extension of the framework, AfRob, delivers an affordance-based ontology targeted at robotic applications. Applications in which AfRob has been used include (a) top-down, task-driven saliency detection, (b) cognitive object recognition, and (c) task-based object grasping and manipulation. In this paper, we use AfRob as the basis for building topological maps intended for robotic navigation. Traditional approaches to robotic navigation use metric maps, topological maps, or hybrid systems that combine the two at different levels of resolution or granularity. While metric and grid-based maps provide high-accuracy results for optimal path-planning schemes, they impose high space-time costs for computation and storage, which limits their real-time applicability. Topological maps, on the other hand, being abstract graph-based structures, are extremely lightweight and convenient for goal-driven navigation, but suffer from low resolution and poor self-localization and loop closing. Both approaches are severely restricted in dynamic environments, where objects that serve as features for map building are moved or removed from the scene over the robot's period of use. This paper presents a novel approach to topological map building that exploits affordance features to construct lightweight, high-resolution, holistic, and cognitive maps by predicting the positional and functional characteristics of unseen objects. In addition, these features enable a cognitive approach to handling dynamic scene content, providing enhanced loop closing and self-localization over traditional topological map building. They also offer cues for place learning and functional room-unit classification, thereby enabling superior task-based path planning. Since these features are easy to detect, maps can be built quickly. Results on synthetic and real scenes demonstrate the benefits of the proposed approach.
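
To make the central idea concrete, the sketch below shows one minimal way an affordance-annotated topological map could be represented: a graph whose place nodes carry sets of affordance features, with a naive self-localization step that matches the affordance signature observed at the robot's current location against the stored nodes. This is an illustrative assumption, not the paper's implementation; the class names, the affordance labels, and the Jaccard-based matching score are all hypothetical.

    # Minimal sketch (illustrative, not the paper's method): a topological map
    # whose nodes are annotated with affordance feature sets, plus a naive
    # affordance-signature matcher for self-localization.

    from dataclasses import dataclass, field


    @dataclass
    class PlaceNode:
        """A topological map node annotated with affordance features."""
        name: str
        affordances: set[str]                       # e.g. {"support", "containment"}
        neighbours: set[str] = field(default_factory=set)


    class AffordanceTopoMap:
        def __init__(self) -> None:
            self.nodes: dict[str, PlaceNode] = {}

        def add_place(self, name: str, affordances: set[str]) -> None:
            self.nodes[name] = PlaceNode(name, set(affordances))

        def connect(self, a: str, b: str) -> None:
            """Add an undirected traversability edge between two places."""
            self.nodes[a].neighbours.add(b)
            self.nodes[b].neighbours.add(a)

        def localize(self, observed: set[str]) -> tuple[str, float]:
            """Return the place whose stored affordance signature best matches
            the currently observed affordances (Jaccard similarity)."""
            def jaccard(s: set[str], t: set[str]) -> float:
                return len(s & t) / len(s | t) if s | t else 0.0

            best = max(self.nodes.values(),
                       key=lambda n: jaccard(observed, n.affordances))
            return best.name, jaccard(observed, best.affordances)


    if __name__ == "__main__":
        m = AffordanceTopoMap()
        # Affordance labels below are made up for illustration; they only
        # loosely echo AfNet-style feature naming.
        m.add_place("kitchen", {"containment", "support", "cutting", "pouring"})
        m.add_place("office", {"support", "writing", "illumination"})
        m.connect("kitchen", "office")

        place, score = m.localize({"support", "pouring", "containment"})
        print(f"best match: {place} (similarity {score:.2f})")

Because each node stores only a small set of symbolic features rather than dense metric data, such a structure stays lightweight, while the signature matching hints at how affordance cues could support self-localization even when individual objects have been moved or removed.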