Deep Neural Networks (DNNs) have come to dominate recent data mining tasks with superior performance. This article proposes a hybrid approach that exploits the complementary predictive power of DNNs and classical time series regression models, including the Generalized Linear Model (GLM), the Seasonal AutoRegressive Integrated Moving Average (SARIMA) model, and the AutoRegressive Integrated Moving Average with eXplanatory variable (ARIMAX) method, in forecasting real-world time series. For each selected time series regression model, three different hybrid strategies are designed to merge its results with DNNs: Zhang's method, Khashei's method, and a moving average filter-based method. Real seasonal time series data on patient arrival volume at a Hong Kong A&ED (accident and emergency department) center, collected for the period July 1, 2009, through June 30, 2011, are used to compare the forecast accuracy of the proposed hybrid strategies. With the mean absolute percentage error (MAPE) as the metric, the results indicate that all hybrid models achieve higher accuracy than the original single models. Among the three hybrid strategies, Khashei's method and the moving average filter-based method generally achieve lower MAPE than Zhang's method. Furthermore, the predicted value is an important prerequisite for rostering and scheduling in an A&ED center, whether in a simulation-based approach or a mathematical programming approach.
INDEX TERMS Data mining, deep neural networks, hybrid approach, time series regression.
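Zhang's hybrid strategy, for instance, fits a linear time series model first and then trains a neural network on its residuals, so the final forecast is the sum of the two components. The sketch below is a minimal, hedged illustration of that idea, using statsmodels' SARIMAX and scikit-learn's MLPRegressor as stand-ins for the paper's SARIMA and DNN components; the helper `make_lagged` and all hyperparameter values are assumptions for illustration only.

```python
# Minimal sketch of Zhang's hybrid strategy: linear model + NN on residuals.
# SARIMAX and a small MLP stand in for the paper's SARIMA and DNN components;
# orders, lag count, and network size are illustrative assumptions.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

def make_lagged(x, n_lags):
    """Hypothetical helper: build a (samples, n_lags) matrix and targets."""
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

def zhang_hybrid_forecast(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7),
                          n_lags=7):
    # Step 1: linear component -- SARIMA captures the linear/seasonal structure.
    sarima = SARIMAX(y, order=order, seasonal_order=seasonal_order).fit(disp=False)
    linear_fc = sarima.forecast(steps=1)[0]

    # Step 2: nonlinear component -- model the SARIMA residuals with a network.
    resid = y - sarima.fittedvalues
    X, t = make_lagged(resid, n_lags)
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, t)
    resid_fc = nn.predict(resid[-n_lags:].reshape(1, -1))[0]

    # Step 3: hybrid one-step-ahead forecast = linear part + predicted residual.
    return linear_fc + resid_fc

# Usage (y is a 1-D numpy array of, e.g., daily arrival counts):
# y_hat = zhang_hybrid_forecast(np.asarray(daily_arrivals, dtype=float))
```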
Lin et al. propose the iterative Toeplitz filter algorithm as an alternative iterative algorithm for Empirical Mode Decomposition (EMD). In this alternative, the mean of the upper and lower envelopes is replaced by a certain "moving average" obtained through a low-pass filter. Performing the traditional sifting algorithm with such moving averages is equivalent to iterating certain convolution filters (finite-length Toeplitz filters). This paper studies the convergence of the algorithm for signals of a continuous variable and proves that, in the limit, the iteration acts as an ideal high-pass filter.
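To make the modified sifting step concrete, the toy sketch below (an illustration under stated assumptions, not Lin et al.'s exact construction) replaces the envelope mean with a simple boxcar moving average and iterates the resulting convolution (Toeplitz) filter; repeated application progressively strips the low-frequency content, consistent with the high-pass limit the paper establishes. The window width and iteration count are arbitrary choices.

```python
# Toy sifting with a moving-average low-pass filter in place of the mean of
# the upper/lower envelopes. Each sifting step subtracts the moving average,
# i.e. applies the convolution filter (I - MA); iterating this Toeplitz
# filter suppresses low frequencies.
import numpy as np

def moving_average(x, width):
    """Low-pass 'moving average': convolution with a normalized boxcar kernel."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def filtered_sifting(x, width=11, n_iter=50):
    h = x.copy()
    for _ in range(n_iter):
        h = h - moving_average(h, width)  # one sifting step: remove the local mean
    return h                              # approaches a high-pass filtered x

# Example: slow trend plus fast oscillation; sifting retains the fast part.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 3 * t) + np.sin(2 * np.pi * 40 * t)
imf = filtered_sifting(x)
```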
The definition of the graph Fourier transform is a fundamental issue in graph signal processing. The conventional graph Fourier transform is defined through the eigenvectors of the graph Laplacian matrix, which minimize the ℓ2-norm signal variation. However, computing the Laplacian eigenvectors is expensive when the graph is large. In this paper, we propose an alternative definition of the graph Fourier transform based on ℓ1-norm variation minimization. We obtain a necessary condition satisfied by the ℓ1 Fourier basis and provide a fast greedy algorithm to approximate it. Numerical experiments show the effectiveness of the greedy algorithm. Moreover, the Fourier transform under the greedy basis exhibits a rate of decay similar to that of the Laplacian basis for both simulated and real signals.
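For context, the sketch below shows the conventional Laplacian-based graph Fourier transform alongside the two variation measures the abstract contrasts. It is a hedged illustration of the standard definitions only; the paper's greedy algorithm for approximating the ℓ1 Fourier basis is not reproduced here.

```python
# Conventional graph Fourier transform and the l2 vs. l1 signal-variation
# measures on a small undirected graph. Standard definitions only; this is
# not the paper's greedy l1-basis algorithm.
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A for adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def gft(x, A):
    """Conventional GFT: expand x in the Laplacian eigenvector basis."""
    _, U = np.linalg.eigh(laplacian(A))  # columns of U form the Fourier basis
    return U.T @ x

def l2_variation(x, A):
    """Quadratic variation x^T L x = sum over edges of w_ij (x_i - x_j)^2."""
    return float(x @ laplacian(A) @ x)

def l1_variation(x, A):
    """l1 variation: sum over edges of w_ij |x_i - x_j|."""
    i, j = np.triu_indices_from(A, k=1)
    return float(np.sum(A[i, j] * np.abs(x[i] - x[j])))

# Example: a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 0.5, -1.0])
x_hat = gft(x, A)
```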