Deep sequential (DS) models have been extensively employed for forecasting time series data since the dawn of the deep learning era, providing forecasts of the values required at subsequent time steps. Unlike traditional statistical models for time series forecasting, DS models can learn hidden patterns in temporal sequences and can memorize data from prior time points. Given the widespread use of deep sequential models across several domains, a comprehensive study describing their applications is necessary. This work presents a comprehensive review of contemporary deep learning time series models, their performance in diverse domains, and an investigation of the models employed in various applications. Three deep sequential models, namely the artificial neural network (ANN), long short-term memory (LSTM), and temporal convolutional neural network (TCNN), along with their applications for forecasting time series data, are elaborated. We present a comprehensive comparison of these models in terms of application fields, model structure, activation functions, optimizers, and implementation, with the goal of identifying the best-performing model. Furthermore, the challenges and perspectives for future development of deep sequential models are presented and discussed. We conclude that the LSTM model is the most widely employed, particularly in hybrid form, and that hybrid LSTM architectures yield the most accurate forecasts.
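To make the forecasting setting concrete, the following is a minimal sketch (not taken from the surveyed works) of an LSTM-based one-step-ahead forecaster in PyTorch; the class name, window length, and hyperparameters are illustrative assumptions, not the configuration used by any reviewed model.

```python
# Minimal illustrative sketch of an LSTM one-step-ahead time series forecaster.
# Names and hyperparameters (LSTMForecaster, hidden_size=32, window of 24 steps)
# are assumptions for demonstration, not drawn from the surveyed models.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, n_features=1, hidden_size=32, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x has shape (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        # Predict the next value from the hidden state at the last time step.
        return self.head(out[:, -1, :])

model = LSTMForecaster()
window = torch.randn(8, 24, 1)   # 8 windows of 24 past observations, 1 feature
next_value = model(window)       # forecast for the next time step
print(next_value.shape)          # torch.Size([8, 1])
```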