In mobile robotics, navigation is one of the most fundamental tasks, and it becomes especially challenging in local navigation, where the environment is unknown and the robot must explore using only its sensory information. Reinforcement learning (RL), a biologically inspired learning paradigm, has attracted wide attention because it can learn autonomously in an unknown environment. However, the randomized exploration behavior common in RL increases computation time and cost, making it less appealing for real-world scenarios. This paper proposes an informed-biased softmax regression (iBSR) learning process that introduces a heuristic-based cost function to ensure faster convergence. Here, action selection is not treated as a random process; rather, the action with the maximum probability, computed using softmax regression, is selected. The strength of the proposed approach is tested in simulated navigation scenarios, and, for comparison and analysis, the iBSR learning process is evaluated against two benchmark algorithms.
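To make the non-random, maximum-probability action selection concrete, the following minimal Python sketch applies a softmax over heuristically biased action scores and picks the most probable action. All names, and the additive form of the heuristic bias, are illustrative assumptions; they are not the paper's actual iBSR cost function.

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = scores - np.max(scores)
    e = np.exp(z)
    return e / e.sum()

def select_action(q_values, heuristic_bias):
    # Bias the learned action values with heuristic information (an
    # assumed additive form), convert them to a probability distribution,
    # and select the maximum-probability action rather than sampling.
    probs = softmax(np.asarray(q_values) + np.asarray(heuristic_bias))
    return int(np.argmax(probs)), probs

action, probs = select_action([0.2, 0.5, 0.1], [0.0, 0.3, 0.0])
```

Because the argmax is taken instead of sampling from the softmax distribution, the selection is deterministic, which is the property the abstract contrasts with randomized exploration.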