We consider Markov decision processes with uncertain transition probabilities and study two optimization problems in this context: the finite horizon problem, which asks for a policy that is optimal over a finite number of transitions, and the percentile optimization problem, which, for a wide class of uncertain Markov decision processes, asks for a policy that maximizes the probability of reaching a given reward objective. To the best of our knowledge, unlike other optimality criteria, the finite horizon problem has not been considered for bounded-parameter Markov decision processes, and the percentile optimization problem has only been considered for very special cases. Unlike most problems studied for Markov decision processes, dynamic programming is not applicable here, since the usual decomposition into independent subproblems at each state is no longer possible. Motivated by this observation, we establish NP-hardness results for both problems via appropriate reductions.
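One way to state the percentile criterion described above is the following sketch, under assumed notation not fixed in the text ($\Pi$ for the policy class, $\mu$ for a distribution over transition models $P$, $V_P^{\pi}(s_0)$ for the cumulative reward of policy $\pi$ under model $P$ from initial state $s_0$, and $r$ for the reward objective):

\[
\max_{\pi \in \Pi} \;\; \Pr_{P \sim \mu}\!\left[\, V_P^{\pi}(s_0) \ge r \,\right],
\]

i.e., one seeks a policy maximizing the probability, taken over the uncertainty in the transition probabilities, that the accumulated reward meets the objective $r$.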