We develop a theory for continuous-time non-Markovian stochastic control problems that are inherently time-inconsistent. Their distinguishing feature is that the classical Bellman optimality principle no longer holds. Our formulation is cast within the framework of a controlled non-Markovian forward stochastic differential equation and a general class of objective functionals. We adopt a game-theoretic approach to study such problems, that is, we look for subgame-perfect Nash equilibria. As a first novelty of this work, we introduce and motivate a new definition of equilibrium that allows us to rigorously establish an extended dynamic programming principle, in the same spirit as in the classical theory. This, in turn, allows us to introduce a system of backward stochastic differential equations analogous to the classical Hamilton-Jacobi-Bellman equation. We prove that this system is fundamental, in the sense that its well-posedness is both necessary and sufficient to characterise the value function and the equilibria. As a final step, we provide an existence and uniqueness result for this system. Some examples and extensions of our results are also presented.
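For orientation, a minimal sketch of the type of formulation described above, with illustrative placeholder coefficients $b$, $\sigma$, $f$, $g$ that are assumptions of this sketch rather than the paper's exact setting, is:
% Illustrative sketch only: a generic controlled path-dependent (hence
% non-Markovian) SDE together with a reward functional whose running and
% terminal rewards may depend on the initial time t; the coefficients
% b, sigma, f, g are placeholder assumptions, not the paper's exact setting.
\[
  X_s^{t,x,\alpha} = x
    + \int_t^s b\big(r, X_{\cdot \wedge r}^{t,x,\alpha}, \alpha_r\big)\,\mathrm{d}r
    + \int_t^s \sigma\big(r, X_{\cdot \wedge r}^{t,x,\alpha}, \alpha_r\big)\,\mathrm{d}W_r,
\]
\[
  J(t, x; \alpha) = \mathbb{E}\!\left[
      \int_t^T f\big(t, r, X_{\cdot \wedge r}^{t,x,\alpha}, \alpha_r\big)\,\mathrm{d}r
      + g\big(t, X_{\cdot \wedge T}^{t,x,\alpha}\big)
    \right].
\]
In sketches of this kind, the dependence of the coefficients on the whole path $X_{\cdot \wedge r}$ captures the non-Markovian feature, while the dependence of $f$ and $g$ on the initial time $t$ is a standard source of time-inconsistency, under which the Bellman optimality principle fails.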