The development of the mobile Internet and the success of deep learning in many applications have driven the need to deploy deep learning models on mobile devices with limited resources. Long Short-Term Memory (LSTM), a special architecture in deep learning, can learn long-distance dependencies hidden in time series. However, the high computational complexity of LSTM-related structures and the large amount of resources required for training are obstacles to their deployment on mobile devices. To reduce the resource requirements and computational cost of LSTMs, we use a pruning strategy to preserve only the important connections during the training phase. After training, we further reduce the complexity of the LSTM network with a weight-sharing strategy. Based on these strategies, we propose a sparsely connected LSTM with shared weights (SCLSTM). Experimental results on real-world datasets show that SCLSTM, retaining only 0.88% of the neural connections, achieves prediction performance comparable to a densely connected LSTM. Moreover, SCLSTM alleviates overfitting to some extent. The experiments demonstrate that SCLSTM outperforms state-of-the-art algorithms on resource-limited mobile devices.

INDEX TERMS Deep learning, LSTM, time series, pruning.
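The abstract names the two strategies but does not spell out their mechanics. The following is a minimal sketch of one plausible realization, assuming magnitude-based pruning as the "important connections" criterion and k-means clustering of the surviving weights for weight sharing (a Deep Compression-style pipeline); the layer sizes, keep_ratio, and n_clusters below are illustrative assumptions, not the paper's settings (the paper itself reports keeping 0.88% of connections).

```python
# Sketch: training-phase pruning + post-training weight sharing on an LSTM.
# Assumptions (not from the paper): magnitude pruning, k-means sharing,
# and all numeric settings below.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)
lstm = nn.LSTM(input_size=16, hidden_size=32)

def magnitude_mask(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Binary mask that keeps only the largest-magnitude connections."""
    k = max(1, int(weight.numel() * keep_ratio))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()

def share_weights(weight: torch.Tensor, n_clusters: int) -> torch.Tensor:
    """Replace each surviving weight with the centroid of its k-means cluster."""
    values = weight[weight != 0].reshape(-1, 1).detach().cpu().numpy()
    km = KMeans(n_clusters=min(n_clusters, len(values)), n_init=10).fit(values)
    shared = weight.clone()
    shared[weight != 0] = torch.from_numpy(
        km.cluster_centers_[km.labels_].ravel()
    ).float()
    return shared

with torch.no_grad():
    for name in ("weight_ih_l0", "weight_hh_l0"):
        w = getattr(lstm, name)
        w.mul_(magnitude_mask(w, keep_ratio=0.05))  # sparse connectivity
        w.copy_(share_weights(w, n_clusters=16))    # shared weight values
```

In a full pipeline the pruning mask would typically be re-applied after each optimizer step during training, and the network would be fine-tuned after weight sharing to recover any lost accuracy; this sketch only shows the two transformations themselves.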