This paper solves the problem of computing the maximum (and minimum) probability to reach a set of goal states within a given time bound in continuous-time Markov decision processes (CTMDPs). For the subclass of locally uniform CTMDPs, we define the class of late total time positional schedulers (TTPD_l) and prove that they suffice to resolve all nondeterministic choices in an optimal way. Our main contribution is a discretization technique which, for an a priori given error bound ε > 0, induces a discrete-time MDP that approximates the maximum time-bounded reachability probability in the underlying locally uniform CTMDP up to ε. In a second part, we consider arbitrary CTMDPs. In this more general setting, TTPD_l schedulers are inapplicable and are replaced by the corresponding class of early TTPD schedulers. Using a measure-preserving transformation from CTMDPs to interactive Markov chains (IMCs), we apply recent results on IMCs to compute the maximum time-bounded reachability probability under early schedulers in the CTMDP's induced IMC.
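To illustrate the general idea of such a discretization, the following is a minimal sketch (not the paper's actual algorithm) of backward value iteration on a toy uniform CTMDP: the time bound T is split into n_steps slices, and within each slice a transition fires with probability 1 − e^(−E·δ), where E is the uniform exit rate. All names (`max_reachability`, the dictionary-based transition encoding) are illustrative assumptions.

```python
import math

def max_reachability(states, goal, rate, trans, T, n_steps):
    """Approximate the maximum time-bounded reachability probability
    in a uniform CTMDP by discretizing [0, T] into n_steps slices.

    trans[s][a] maps each successor state to its branching probability.
    This is an illustrative sketch, not the discretization from the paper.
    """
    delta = T / n_steps
    # Probability that a transition occurs within one slice of length delta.
    p_jump = 1.0 - math.exp(-rate * delta)
    # Goal states have reachability probability 1; all others start at 0.
    v = {s: (1.0 if s in goal else 0.0) for s in states}
    for _ in range(n_steps):
        nv = {}
        for s in states:
            if s in goal:
                nv[s] = 1.0
                continue
            best = 0.0
            for a, dist in trans[s].items():
                # Either a transition fires (move per the chosen action's
                # distribution) or none does (stay in s for this slice).
                val = (p_jump * sum(p * v[t] for t, p in dist.items())
                       + (1.0 - p_jump) * v[s])
                best = max(best, val)
            nv[s] = best
        v = nv
    return v

# Toy example: from s0, one action leads to the goal s1 with probability 1.
# With exit rate 1 and time bound 1, the probability of at least one jump
# within [0, 1] is 1 - e^{-1}.
result = max_reachability(
    states={"s0", "s1"},
    goal={"s1"},
    rate=1.0,
    trans={"s0": {"a": {"s1": 1.0}}, "s1": {}},
    T=1.0,
    n_steps=1000,
)
```

A finer discretization (larger n_steps) shrinks the approximation error, which parallels the role of the a priori error bound ε in the paper's construction.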