Recurrent neural networks (RNNs) operate over sequences of vectors and have been successfully applied to a wide variety of problems. However, RNNs struggle to model the variable dwell time of the hidden state underlying an input sequence. In this article, we interpret typical RNNs, including the original RNN, the standard long short-term memory (LSTM), the peephole LSTM, the projected LSTM, and the gated recurrent unit (GRU), in terms of a slightly extended hidden Markov model (HMM). Motivated by this interpretation, we propose a novel RNN, called the explicit duration recurrent network (EDRN), which is analogous to a hidden semi-Markov model (HSMM). EDRN outperforms conventional LSTMs and can explicitly model any duration distribution of the hidden state. Its parameters are interpretable and can be used to infer many quantities that conventional RNNs cannot provide. EDRN is therefore expected to extend and enrich the applications of RNNs. The interpretation also suggests that conventional RNNs, including the LSTM and the GRU, can be improved with small modifications that do not increase the number of network parameters.
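Since the abstract does not give the EDRN update equations, the following is only a minimal sketch of the general idea it describes: a recurrent cell that carries an explicit distribution over the remaining dwell time of its hidden state, in the spirit of an HSMM. All names here (`ExplicitDurationCell`, `max_duration`, and so on) are hypothetical illustrations and are not taken from the paper.

```python
# Illustrative sketch only, not the authors' EDRN: a toy recurrent cell that
# tracks an explicit duration distribution alongside the hidden state.
import torch
import torch.nn as nn

class ExplicitDurationCell(nn.Module):
    def __init__(self, input_size, hidden_size, max_duration):
        super().__init__()
        # Candidate hidden state, as in a vanilla RNN cell.
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)
        # Logits for a fresh distribution over the dwell time of a new state.
        self.duration = nn.Linear(input_size + hidden_size, max_duration)

    def forward(self, x, state):
        h, dur = state            # h: (B, H); dur: (B, D) dwell-time distribution
        z = torch.cat([x, h], dim=-1)
        h_new = torch.tanh(self.candidate(z))
        # Probability that the current hidden state ends at this step
        # (mass on remaining duration zero).
        p_end = dur[:, :1]
        # Keep the old hidden state while it persists; switch when it ends.
        h = (1.0 - p_end) * h + p_end * h_new
        # Survival case: shift the distribution by one step and renormalize.
        shifted = dur[:, 1:] / (1.0 - p_end + 1e-8)
        shifted = torch.cat([shifted, torch.zeros_like(p_end)], dim=-1)
        # Switch case: predict a fresh dwell-time distribution for the new state.
        fresh = torch.softmax(self.duration(z), dim=-1)
        dur = (1.0 - p_end) * shifted + p_end * fresh
        return h, (h, dur)

# Usage: start from a uniform dwell-time distribution and step the cell once.
cell = ExplicitDurationCell(input_size=8, hidden_size=16, max_duration=10)
h = torch.zeros(4, 16)
dur = torch.full((4, 10), 0.1)
out, (h, dur) = cell(torch.randn(4, 8), (h, dur))
```

Renormalizing the shifted distribution keeps `dur` a proper probability distribution at every step, so the cell can, under these assumptions, represent an arbitrary dwell-time distribution over up to `max_duration` steps rather than the implicitly geometric dwell time of a gated RNN.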