In this paper, alternative approaches to testing the unit root hypothesis in panel data are considered. First, a version of the Dickey-Fuller t-statistic that is robust to contemporaneously correlated errors is suggested. Second, the GLS t-statistic is considered, which is based on the t-statistic of the transformed model. The asymptotic power of both tests is compared against a sequence of local alternatives. To adjust for short-run serial correlation of the errors, we propose a pre-whitening procedure that yields a test statistic with a standard normal limiting distribution as N and T tend to infinity. The test procedure is further generalized to accommodate individual-specific intercepts or linear time trends. Our Monte Carlo simulations indicate that the robust OLS t-statistic performs well with respect to size and power, whereas the GLS t-statistic may suffer from severe size distortions in small and moderate samples. The tests are applied to test for a unit root in real exchange rates.
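The Dickey-Fuller t-statistic at the core of these tests can be sketched for a single series; the minimal version below omits the paper's panel-specific robustification, pre-whitening, and deterministic terms, and is purely illustrative:

```python
import numpy as np

def df_tstat(y):
    """OLS t-statistic for rho in: diff(y)_t = rho * y_{t-1} + e_t
    (no intercept, no lag augmentation)."""
    dy = np.diff(y)          # first differences
    ylag = y[:-1]            # lagged level
    rho = (ylag @ dy) / (ylag @ ylag)      # OLS slope
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)     # residual variance
    se = np.sqrt(s2 / (ylag @ ylag))       # standard error of rho
    return rho / se

# Simulated random walk: the unit root null holds by construction.
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))
t = df_tstat(y)
```

Under the null this statistic follows the non-standard Dickey-Fuller distribution; the panel statistics in the paper aggregate such unit-level information and, after the proposed standardization, are asymptotically standard normal.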
Learning acoustic models directly from raw waveform data with minimal processing is challenging. Current waveform-based models have generally used very few (∼2) convolutional layers, which might be insufficient for building high-level discriminative features. In this work, we propose very deep convolutional neural networks (CNNs) that directly use time-domain waveforms as inputs. Our CNNs, with up to 34 weight layers, are efficient to optimize over very long sequences (e.g., vectors of length 32,000), as is necessary for processing acoustic waveforms. This is achieved through batch normalization, residual learning, and a careful design of down-sampling in the initial layers. Our networks are fully convolutional, without fully connected layers or dropout, to maximize representation learning. We use a large receptive field in the first convolutional layer to mimic bandpass filters, but very small receptive fields subsequently to control the model capacity. We demonstrate the performance gains of the deeper models. Our evaluation shows that the CNN with 18 weight layers outperforms the CNN with 3 weight layers by over 15% in absolute accuracy on an environmental sound recognition task and matches the performance of models using log-mel features.
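The down-sampling design can be made concrete by tracking how the sequence length shrinks through the network. The layer schedule below is illustrative only, not the paper's exact configuration; it shows the pattern of a large-kernel, strided first layer followed by small-kernel stages:

```python
def conv_out_len(n, kernel, stride, pad=0):
    """Output length of a 1-D convolution or pooling layer."""
    return (n + 2 * pad - kernel) // stride + 1

n = 32000                                   # raw waveform samples
n = conv_out_len(n, kernel=80, stride=4)    # large first receptive field (bandpass-like)
n = conv_out_len(n, kernel=4, stride=4)     # pooling: aggressive early down-sampling
for _ in range(4):                          # hypothetical small-kernel residual stages
    n = conv_out_len(n, kernel=3, stride=1, pad=1)  # length-preserving conv
    n = conv_out_len(n, kernel=4, stride=4)         # pooling between stages
```

Early down-sampling is what makes stacking many weight layers tractable: the expensive small-kernel layers operate on sequences that are orders of magnitude shorter than the raw input.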
This paper considers various tests of the unit root hypothesis in panels where the cross-section dependence is due to common dynamic factors. Three situations are studied. First, the common factors and idiosyncratic components may both be non-stationary. In this case, test statistics based on generalized least squares (GLS) possess a standard normal limiting distribution, whereas test statistics based on ordinary least squares (OLS) are invalid. Second, if the common component is I(1) and the idiosyncratic component is stationary (the case of cross-unit cointegration), then both the OLS and the GLS statistics fail. Finally, if the idiosyncratic components are I(1) but the common factors are stationary, then the OLS-based test statistics are severely biased, whereas the GLS-based test statistics are asymptotically valid in this situation. A Monte Carlo study is conducted to verify the asymptotic results.
Environmental sound detection is a challenging application of machine learning because of the noisy nature of the signal and the small amount of (labeled) data that is typically available. This work thus presents a comparison of several state-of-the-art Deep Learning models on the IEEE Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 challenge task and data, classifying sounds into one of fifteen common indoor and outdoor acoustic scenes, such as bus, cafe, car, city center, forest path, library, and train. In total, 13 hours of stereo audio recordings are available, making this one of the largest datasets of its kind. We perform experiments on six sets of features, including standard Mel-frequency cepstral coefficients (MFCC), Binaural MFCC, log Mel-spectrum, and two different large-scale temporal pooling features extracted using OpenSMILE. On these features, we apply five models: Gaussian Mixture Model (GMM), Deep Neural Network (DNN), Recurrent Neural Network (RNN), Convolutional Deep Neural Network (CNN), and i-vector. Using the late-fusion approach, we improve on the 72.5% baseline by 15.6% in 4-fold Cross Validation (CV) average accuracy and 11% in test accuracy, which matches the best result of the DCASE 2016 challenge. With large feature sets, deep neural network models outperform traditional methods and achieve the best performance among all the studied methods. Consistent with other work, the best-performing single model is the non-temporal DNN model, which we take as evidence that sounds in the DCASE challenge do not exhibit strong temporal dynamics.
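Late fusion combines the per-class posterior probabilities of several trained models into one decision. A minimal sketch in numpy, where the function, weights, and toy probabilities are all hypothetical (the paper's exact fusion weights are not reproduced here):

```python
import numpy as np

def late_fusion(prob_list, weights=None):
    """Weighted average of class posteriors from several models (late fusion).
    prob_list: list of (n_samples, n_classes) probability arrays."""
    probs = np.stack(prob_list)            # (n_models, n_samples, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                        # normalize model weights
    fused = np.tensordot(w, probs, axes=1) # weighted average over models
    return fused.argmax(axis=1)            # fused class decision

# Toy example: two models disagree on a 3-class sample;
# the fused posterior favors class 1.
p_gmm = np.array([[0.6, 0.3, 0.1]])
p_dnn = np.array([[0.2, 0.6, 0.2]])
pred = late_fusion([p_gmm, p_dnn])
```

Averaging posteriors rather than hard labels lets a confident model outvote an uncertain one, which is typically why late fusion outperforms the individual models.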