We address two central notions of fairness in the literature on planning in nondeterministic fully observable domains. The first, which we call stochastic fairness, is classical and assumes an environment that operates probabilistically, with possibly unknown probabilities. The second is language-theoretic: it assumes that if an action is taken from a given state infinitely often, then all of its possible outcomes appear infinitely often; we call this state-action fairness. While the two notions coincide for standard reachability goals, they differ for temporally extended goals. This important difference has been overlooked in the planning literature and has led to a number of published algorithms, based on a product reduction, that are stated for state-action fairness but are incorrect for it, although they are correct for stochastic fairness. We remedy this and provide a correct optimal algorithm for state-action fair planning with LTL/LTLf goals, as well as a correct proof of the corresponding lower bound on goal complexity. Our proof is general enough that it also yields, for the no-fairness and stochastic-fairness cases, several missing lower bounds and new proofs of known lower bounds. Overall, we show that stochastic fairness is better behaved than state-action fairness.
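For concreteness, the state-action fairness condition described informally above can be written as follows. The notation (states $S$, actions $A$, a nondeterministic transition function $\delta$, and plays $\pi = s_0 a_0 s_1 a_1 \cdots$) is a plausible formalization assumed here for illustration, not necessarily the definitions used in the body of the paper.

\[
% Assumed notation: a domain with states S, actions A, and a
% nondeterministic transition function \delta : S \times A \to 2^S;
% a play is an infinite sequence \pi = s_0 a_0 s_1 a_1 \cdots
% with s_{i+1} \in \delta(s_i, a_i).
\pi \text{ is state-action fair} \;\iff\;
\forall s \in S,\ a \in A,\ s' \in \delta(s,a):\;
\bigl(\exists^{\infty} i.\ (s_i,a_i)=(s,a)\bigr)
\;\Rightarrow\;
\bigl(\exists^{\infty} i.\ (s_i,a_i,s_{i+1})=(s,a,s')\bigr)
\]

Here $\exists^{\infty} i$ abbreviates "there exist infinitely many indices $i$"; stochastic fairness instead requires each outcome $s' \in \delta(s,a)$ to be drawn with some fixed positive (possibly unknown) probability, which implies the condition above almost surely but is not equivalent to it for temporally extended goals.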