Safe and efficient navigation of mobile robots in the presence of unknown dynamic obstacles remains a complex and unresolved challenge. This paper presents collision-free path planning for a mobile robot that safely handles multi-directional obstacles, i.e., randomly moving dynamic obstacles, using a Deep Reinforcement Learning (DRL) algorithm, the Deep Q-Network (DQN), with inflated robot reward functions. The robot follows a time-efficient, collision-free route while maintaining a safe distance from both static and unpredictably moving dynamic obstacles. The modified DQN algorithm takes RGB images of the environment as input to train a Convolutional Neural Network (CNN) and produces a safe, short navigation path. The robot used for training is an omni-wheeled mobile robot exploring an outdoor (concourse) environment and an indoor (home) environment. A Closed-Loop Inverse Kinematics (CLIK) algorithm controls the mobile robot so that it follows the desired path. Simulation results indicate that the proposed algorithm with inflated robot reward functions markedly outperforms recently used Reinforcement Learning (RL) algorithms when dealing with both stationary and randomly moving obstacles in the given environments.