The goal of this article is to study the fundamental mechanisms behind so‐called indirect and direct data‐driven control for unknown systems. Specifically, we consider policy iteration applied to the linear quadratic regulator problem, and study two iterative procedures in which data collected from the system are repeatedly used to compute new estimates of the desired optimal controller. In indirect policy iteration, data are used to obtain an updated model estimate through a recursive identification scheme, which is then used in a certainty‐equivalent fashion to perform the classic policy iteration update. By casting the concurrent model identification and control design as a feedback interconnection between two algorithmic systems, we provide a closed‐loop analysis that establishes convergence and robustness properties for arbitrary levels of excitation in the data. In direct policy iteration, data are used to approximate the value function and design the associated controller without requiring the intermediate identification step. After extending a recently proposed scheme to overcome potential identifiability issues, we establish the conditions under which this procedure is guaranteed to deliver the optimal controller. Based on these analyses, we compare the strengths and limitations of the two approaches, highlighting aspects such as sample requirements, convergence properties, and excitation requirements. Simulations are also provided to illustrate the results.
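As a point of reference, the certainty‐equivalent step in the indirect scheme builds on the classic model‐based policy iteration (Hewer) update for LQR. A minimal sketch in the discrete‐time setting is given below; the discrete‐time formulation and the symbols $A$, $B$, $Q$, $R$, $P_i$, $K_i$ are assumptions made here for illustration and need not match the paper's notation.

```latex
% Classic model-based policy iteration for discrete-time LQR (sketch).
% Assumed setup: dynamics x_{k+1} = A x_k + B u_k, cost \sum_k (x_k^T Q x_k + u_k^T R u_k),
% a stabilizing initial gain K_0, and control law u_k = -K_i x_k at iteration i.
\begin{align*}
  \text{(policy evaluation)}\quad
    & P_i = Q + K_i^{\top} R K_i + (A - B K_i)^{\top} P_i (A - B K_i), \\
  \text{(policy improvement)}\quad
    & K_{i+1} = \left(R + B^{\top} P_i B\right)^{-1} B^{\top} P_i A .
\end{align*}
% In the indirect scheme, (A, B) would be replaced by the current identified
% model estimate; in the direct scheme, the value matrix P_i is approximated
% from data without the intermediate identification step.
```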