This paper deals with unconstrained discounted continuous-time Markov decision processes in Borel state and action spaces. Under conditions imposed on the primitives, allowing unbounded transition rates and cost rates that are unbounded both from above and from below, we show the regularity of the controlled process, which ensures that the underlying models are well defined. Then we develop the dynamic programming approach by showing that the Bellman equation is satisfied by the optimal value. Finally, under some compactness-continuity conditions, we obtain the existence of a deterministic stationary optimal policy out of the class of randomized history-dependent policies.

1 It is standard practice to use "CTMDPs" and "CTMDP optimization problems" interchangeably.

Earlier works studied CTMDPs allowing transition rates that are not uniformly bounded. However, the conditions assumed therein are difficult to verify, as some of them are imposed not directly on the primitives but on the transition probability functions. Later on, there have been developments in the direction of imposing conditions only on the primitives, while still allowing unbounded transition rates; see [8,25] and the relevant chapters of the monograph [9]. It should be noted that all of the aforementioned works allowing unbounded transition rates are restricted to the class of randomized Markov policies. As a matter of fact, according to [7], the study of CTMDPs combining randomized history-dependent policies with unbounded transition rates had been an open problem for over thirty years. To the best of our knowledge, the first successful treatment of such CTMDPs is given in [10], where the state space is countable.

In the present paper, we consider a more general setting by taking randomized history-dependent policies, unbounded transition rates, and Borel state and action spaces into consideration, while imposing all of our conditions on the primitives. The cost rates, allowed to be unbounded both from below and from above, are more general than those considered in [4,5,6,7,8,9,10] and many other works.

The main contributions of the present paper are threefold. First, under the conditions imposed on the primitives, we show the regularity of the controlled process under any given randomized history-dependent policy, which allows a formal statement of the optimization problem. Second, we develop the dynamic programming approach by showing that the optimal value of the problem satisfies the corresponding Bellman equation. Finally, we establish the existence of a deterministic stationary optimal policy. In relation to the most recent literature on this topic, the present work refines [8] by considering randomized history-dependent policies, and extends [10] to the case of Borel state spaces and more general cost rates.

The rest of this paper is organized as follows. In Section 2, we briefly describe Kitaev's construction for CTMDPs and present some preliminary results, including the regularity of the controlled process, Kolmogorov's forward equations, and Dynkin's formula.
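For concreteness, the Bellman equation referred to above can be sketched as follows for the discounted problem. The notation is assumed here for illustration and may differ from the paper's: α > 0 is the discount factor, c(x, a) the cost rate, A(x) the set of admissible actions at state x, S the Borel state space, and q(dy | x, a) the (signed) transition rate kernel satisfying q(S | x, a) = 0.

% A minimal sketch of the discounted-cost Bellman (optimality) equation
% for CTMDPs, under the assumed notation above; the paper's exact
% formulation, conditions, and function classes may differ.
\[
  \alpha V^{*}(x) \;=\; \inf_{a \in A(x)}
  \left\{ \, c(x,a) + \int_{S} V^{*}(y)\, q(dy \mid x, a) \right\},
  \qquad x \in S.
\]
% Under compactness-continuity conditions, a measurable selector f^{*}
% attaining the infimum for each x defines a deterministic stationary
% policy, which is the natural candidate for the optimal policy whose
% existence is asserted above.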