This paper deals with the Finite-Time Analysis (FTA) of Learning Automata (LA), a topic for which very little work has been reported in the literature. This is in contrast to the asymptotic steady-state analysis, for which there are, probably, scores of papers. As clarified later, the FTA of Markov Chains in general, and of LA in particular, is unarguably far more complex than the asymptotic steady-state analysis. Such an FTA provides rigorous bounds on the time required for the LA to attain a given convergence accuracy. We concentrate on the FTA of the Discretized Pursuit Automaton (DPA), which is probably one of the fastest and most accurate LA reported. Although such an analysis was carried out many years ago, we show that the previous work is flawed. More specifically, stated briefly, the flaw lies in the incorrectly "derived" monotonic behavior of the LA after a certain number of iterations. Rather, we claim that the property that should be invoked is the submartingale property. This renders the proof much more involved and deep. In this paper, we rectify the flaw and re-establish the FTA based on this submartingale phenomenon. More importantly, from the derived analysis, we are able to discover and clarify, for the first time, the underlying dilemma between the DPA's exploitation and exploration properties. We also non-trivially confirm the existence of an optimal learning rate, which yields a better comprehension of the DPA itself.
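To make the contrast between the two properties concrete, we offer a minimal sketch, where the notation is assumed here purely for illustration: $p_m(t)$ denotes the probability that the DPA assigns to the optimal action at iteration $t$, $\mathcal{F}_t$ denotes the history of the automaton up to iteration $t$, and $t_0$ denotes the iteration after which the argument is supposed to apply. The flawed argument asserts the deterministic monotonicity
\[
p_m(t+1) \;\ge\; p_m(t) \qquad \text{for all } t \ge t_0,
\]
whereas the weaker submartingale property invoked in this paper states only that
\[
\mathbb{E}\bigl[\, p_m(t+1) \mid \mathcal{F}_t \,\bigr] \;\ge\; p_m(t) \qquad \text{for all } t \ge t_0,
\]
i.e., the probability of the optimal action increases in conditional expectation rather than on every sample path, which is why the finite-time bounds must be obtained by martingale-type arguments rather than by a simple deterministic counting argument.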