Due to the high arithmetic complexity and scalability challenges of deep learning, there is a critical need to shift research focus towards energy efficiency. Tsetlin Machines (TMs) are a recent machine learning (ML) approach that has demonstrated significantly reduced energy consumption compared to neural networks, while providing comparable accuracy on several benchmarks. However, TMs rely heavily on energy-costly random number generation to stochastically guide a team of Tsetlin Automata (TA) during TM learning. In this paper, we propose a novel finite-state learning automaton that can replace the TA in the TM, for increased determinism. The new automaton uses multi-step deterministic state jumps to reinforce sub-patterns, without resorting to randomization. A determinism parameter d finely controls the trade-off between the energy cost of random number generation and the accuracy gains of randomization. Randomization is introduced by flipping a coin before every d-th state jump, skipping the jump on tails. For example, d = 1 makes every update random, while d = ∞ makes the automaton completely deterministic. Both theoretically and empirically, we establish that the proposed automaton converges to the optimal action almost surely. Further, when used together with the TM, only substantial degrees of determinism reduce accuracy. Energy-wise, random number generation contributes to the switching energy consumption of the TM, so high d values save up to 11 mW of power on larger datasets. Our new learning automaton approach thus facilitates low-energy ML.
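To make the coin-flip mechanism concrete, the following is a minimal sketch of a two-action automaton with multi-step state jumps and a determinism parameter d, as described above. The class name, state layout, and parameters (n_states, step) are illustrative assumptions, not taken from the paper.

```python
import random


class ArbitrarilyDeterministicAutomaton:
    """Two-action learning automaton with multi-step state jumps.

    States 0..n-1 select action 0; states n..2n-1 select action 1
    (a hypothetical layout, assumed for illustration). A reward moves
    the state `step` positions deeper into the current action's half;
    a penalty moves it `step` positions towards the other action.
    Before every d-th jump, a fair coin is flipped and the jump is
    skipped on tails: d = 1 makes every update random, while
    d = float('inf') makes the automaton completely deterministic.
    """

    def __init__(self, n_states=100, step=3, d=10, rng=None):
        self.n = n_states
        self.step = step              # size of a multi-step jump
        self.d = d                    # determinism parameter
        self.state = n_states - 1     # start at the action boundary
        self.updates = 0
        self.rng = rng or random.Random()

    @property
    def action(self):
        return 0 if self.state < self.n else 1

    def update(self, reward):
        self.updates += 1
        # Every d-th jump: flip a coin, ignore the jump on tails.
        if self.d != float('inf') and self.updates % self.d == 0:
            if self.rng.random() < 0.5:
                return
        if reward:  # reinforce: move deeper into the current half
            jump = -self.step if self.action == 0 else self.step
        else:       # penalize: move towards the other action
            jump = self.step if self.action == 0 else -self.step
        # Clamp the state to the valid range [0, 2n - 1].
        self.state = min(max(self.state + jump, 0), 2 * self.n - 1)


if __name__ == "__main__":
    # Toy environment: action 1 is rewarded with probability 0.9,
    # action 0 with probability 0.1.
    automaton = ArbitrarilyDeterministicAutomaton(d=10)
    for _ in range(10_000):
        p = 0.9 if automaton.action == 1 else 0.1
        automaton.update(reward=random.random() < p)
    print("converged action:", automaton.action)  # almost surely 1
```

In this sketch, only one of every d state jumps requires a random number, so the rate of random number generation, and hence the associated switching energy, falls as d grows, while the almost-sure convergence behaviour in the toy environment is retained.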