2015
DOI: 10.1007/978-3-319-09903-3_23
Stacked Denoising Auto-Encoders for Short-Term Time Series Forecasting

Abstract: In this chapter, a study of deep learning techniques for time-series forecasting is presented. Using Stacked Denoising Auto-Encoders (SDAE), it is possible to disentangle complex characteristics in time series data. The effects of complete and partial fine-tuning are shown. SDAEs prove able to train deeper models and, consequently, to learn more complex characteristics in the data; hence, these models are able to generalize better. Pre-trained models show better generalization when used without covaria…
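The denoising criterion behind the SDAE models in the abstract can be illustrated with a small sketch: corrupt each input window, then train an auto-encoder to reconstruct the clean window. The following NumPy toy (synthetic sine series, tied weights, arbitrary hyperparameters) is an assumption-laden illustration of the technique, not the chapter's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a synthetic sine series cut into sliding windows.
series = np.sin(np.linspace(0, 20 * np.pi, 2000))
win = 24
X = np.stack([series[i:i + win] for i in range(len(series) - win)])

n_hidden = 12
W = rng.normal(0.0, 0.1, (win, n_hidden))  # tied encoder/decoder weights
b = np.zeros(n_hidden)                     # encoder bias
c = np.zeros(win)                          # decoder bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recon_mse(X, W, b, c):
    # Reconstruction error measured on the *clean* inputs.
    return float(((sigmoid(X @ W + b) @ W.T + c - X) ** 2).mean())

mse_before = recon_mse(X, W, b, c)

lr, noise_p = 0.1, 0.3
for _ in range(50):
    # Denoising criterion: mask a random fraction of each window...
    X_tilde = X * (rng.random(X.shape) > noise_p)
    # ...and reconstruct the uncorrupted window from the corrupted one.
    H = sigmoid(X_tilde @ W + b)   # encoder
    err = H @ W.T + c - X          # decoder output vs clean target
    g = err @ W * H * (1 - H)      # backprop through the encoder nonlinearity
    W -= lr * (X_tilde.T @ g + err.T @ H) / len(X)  # tied-weight gradient
    b -= lr * g.mean(axis=0)
    c -= lr * err.mean(axis=0)

mse_after = recon_mse(X, W, b, c)
```

Because the reconstruction target is the clean window while the encoder only sees the corrupted one, the hidden units are pushed toward features that are robust to the corruption, which is the property the chapter exploits for forecasting.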

Cited by 18 publications (13 citation statements) · References 21 publications
“…As we mentioned in this section, most of the previous work in unsupervised pre-training NN (or deep NNs) has focused on data compression 20 , dimensionality reduction 20,27 , classification 20,28 , and UTS forecasting 20 problems. Importantly, time series forecasting with deep learning techniques is an interesting research area that needs to be studied as well 19,26 . Moreover, even the recent time series forecasting research in the literature has focused on UTS problems.…”
Section: Related Work (mentioning)
confidence: 99%
“…The random initialization of a large number of neurons in such situations will lead the learning algorithm to converge to different local minima, depending on the values of the parameter initialization. Furthermore, as a general practice, previous studies have demonstrated that training deep networks with several layers using random weight initialization and supervised training provides worse results than training shallow architectures 8,18,19 .…”
Section: Introduction (mentioning)
confidence: 99%
“…On the other hand, machine learning methods [6], including the deep learning (DL) approach, are attracting increasing attention in financial time series forecasting [7,8], because deep neural networks are able to extract features automatically, and therefore require neither a priori analysis nor prior knowledge of the time series structure, and are sufficiently reliable with respect to non-stationary time series [9].…”
Section: Introduction (unclassified)
“…Specifically, this idea provides a better approach to (pre)train each layer in turn, initially using a local unsupervised criterion [36] with the aim of learning to produce useful higher-level representations from lower-level-representation output of the previous layer, which leads to much better solutions in terms of generalization performance. Due to such characteristics, DBNs and SDAs were successfully implemented in many nonlinear systems like dimensionality reduction [37][38][39], time-series forecasting [40][41][42], acoustic modeling [43][44][45], and digit recognition [46][47][48]. Therefore, we think the above-mentioned algorithms also have the potential to be applied in urban-sprawl simulations.…”
Section: Introduction (mentioning)
confidence: 99%
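The greedy layer-wise procedure quoted above (pre-train each layer in turn with an unsupervised criterion on the representations produced by the layer below, then train a supervised model on top) can be sketched as follows. This is a hypothetical NumPy toy under invented data and sizes; a least-squares readout stands in for the supervised fine-tuning stage described in the citing papers.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_dae(X, n_hidden, lr=0.1, noise_p=0.3, epochs=30):
    """Greedy layer-wise step: fit one denoising auto-encoder
    (tied weights) on the representations produced so far."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(n_in)
    for _ in range(epochs):
        X_tilde = X * (rng.random(X.shape) > noise_p)  # corrupt input
        H = sigmoid(X_tilde @ W + b)                   # encode
        err = H @ W.T + c - X                          # reconstruct clean X
        g = err @ W * H * (1 - H)
        W -= lr * (X_tilde.T @ g + err.T @ H) / len(X)
        b -= lr * g.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

# Toy series: sliding windows as inputs, the next value as forecast target.
s = np.sin(np.linspace(0, 20 * np.pi, 1500))
win = 24
X = np.stack([s[i:i + win] for i in range(len(s) - win - 1)])
y = s[win:-1]

# Stack two layers, each pre-trained on the previous layer's output.
H = X
for n_h in (16, 8):
    W, b = pretrain_dae(H, n_h)
    H = sigmoid(H @ W + b)

# Supervised readout on the top-level representation (with a bias column).
A = np.hstack([H, np.ones((len(H), 1))])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ theta
mse = float(((pred - y) ** 2).mean())
```

In a full SDAE pipeline the supervised stage would instead backpropagate through all pre-trained layers (the "complete" versus "partial" fine-tuning the abstract contrasts); the linear readout here only shows how the stacked unsupervised features feed the forecasting task.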