Learning Automata (LA) are among the most powerful tools in the field of reinforcement learning. The family of estimator algorithms was proposed to improve the convergence rate of LA and has achieved notable success. However, estimators perform poorly at estimating the reward probabilities of actions during the initial stage of the learning process. As a result, many rewards are erroneously credited to the probabilities of non-optimal actions, and a large number of extra iterations are then needed to compensate for these wrong rewards. To improve the speed of convergence, we propose a new P-model absorbing learning automaton that employs a double competitive strategy for updating the action probability vector, so that wrong rewards can be corrected immediately. Hence, the proposed Double Competitive Algorithm (DCA) overcomes the drawbacks of existing estimator algorithms. A refined analysis is presented to establish the ε-optimality of the proposed scheme. Extensive experimental results in benchmark environments demonstrate that our proposed learning automaton converges more efficiently than the classic SE_RI scheme and the currently fastest LA, DGCPA*.
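To make the estimator mechanism the abstract criticizes concrete, the following is a minimal sketch of a generic continuous pursuit estimator automaton in a P-model (binary feedback) environment. It is an illustrative assumption, not the proposed Double Competitive Algorithm: the function name, learning rate, and benchmark reward probabilities are hypothetical. The sketch shows how the maximum-likelihood estimates are unreliable early on, so the pursuit step can repeatedly reward a non-optimal action; this is the drawback the proposed scheme aims to correct.

```python
# Minimal sketch of a generic pursuit-style estimator LA
# (illustration only; NOT the proposed Double Competitive Algorithm).
import numpy as np

def pursuit_la(reward_probs, learning_rate=0.01, horizon=20000, seed=0):
    """Run a continuous pursuit estimator LA in a P-model environment.

    reward_probs : true reward probability of each action (hypothetical
                   benchmark values; feedback is Bernoulli, i.e. 0 or 1).
    """
    rng = np.random.default_rng(seed)
    r = len(reward_probs)
    p = np.full(r, 1.0 / r)        # action probability vector
    rewards = np.zeros(r)          # total rewards observed per action
    pulls = np.zeros(r)            # times each action was selected

    for _ in range(horizon):
        a = rng.choice(r, p=p)                     # select an action
        beta = rng.random() < reward_probs[a]      # P-model feedback: 0 or 1
        pulls[a] += 1
        rewards[a] += beta

        # Maximum-likelihood estimates of the reward probabilities.
        # In the initial stage these estimates are noisy, so the pursuit
        # step below may keep rewarding a non-optimal action.
        d_hat = np.where(pulls > 0, rewards / np.maximum(pulls, 1), 0.0)
        best = np.argmax(d_hat)

        # Pursue the currently best-estimated action: move p toward the
        # unit vector of that action (a convex combination, so p stays
        # a valid probability vector).
        e = np.zeros(r)
        e[best] = 1.0
        p += learning_rate * (e - p)

    return p

# Example: a hypothetical two-action benchmark environment.
print(pursuit_la([0.8, 0.6]))
```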