In this paper, which is a continuation of the discrete-time paper (Björk and Murgoci in Finance Stoch. 18:545-592, 2014), we study a class of continuous-time stochastic control problems which, in various ways, are time-inconsistent in the sense that they do not admit a Bellman optimality principle. We study these problems within a game-theoretic framework, and we look for subgame perfect Nash equilibrium points. For a general controlled continuous-time Markov process and a fairly general objective functional, we derive an extension of the standard Hamilton-Jacobi-Bellman equation, in the form of a system of nonlinear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. The main theoretical result is a verification theorem. As an application of the general theory, we study a time-inconsistent linear-quadratic regulator. We also present a study of time-inconsistency within the framework of a general equilibrium production economy of Cox-Ingersoll-Ross type (Cox et al. in Econometrica 53:363-384, 1985).
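To indicate the structure of the extended HJB system, the following is a schematic sketch for a simplified special case in which the objective functional has the form \(J(t,x,u)=E_{t,x}[F(X_T^u)]+G(E_{t,x}[X_T^u])\); the notation below (the controlled generator \(\mathcal{A}^u\), the auxiliary function \(g\), and the equilibrium control \(\hat u\)) is illustrative only and does not reflect the running-cost terms or the full state dependence treated in the body of the paper:
\[
\begin{aligned}
&\sup_{u}\Big\{(\mathcal{A}^{u}V)(t,x)-\mathcal{A}^{u}\big(G\circ g\big)(t,x)+G'\big(g(t,x)\big)\,(\mathcal{A}^{u}g)(t,x)\Big\}=0,\\
&(\mathcal{A}^{\hat u}g)(t,x)=0,\\
&V(T,x)=F(x)+G(x),\qquad g(T,x)=x,
\end{aligned}
\]
where \(\mathcal{A}^{u}\) is the infinitesimal generator of the controlled Markov process, \(\hat u\) is the control attaining the supremum, and \(g(t,x)=E_{t,x}[X_T^{\hat u}]\). The point of the sketch is that the nonlinear term \(G(E_{t,x}[X_T])\), which destroys time consistency, is handled through the auxiliary function \(g\), so that the value function \(V\) and \(g\) must be determined simultaneously rather than from a single Bellman equation.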