Abstract: It is shown that a strongly consistent estimation procedure for the order of an autoregression can be based on the law of the iterated logarithm for the partial autocorrelations. Compared with other strongly consistent procedures, this procedure underestimates the order to a lesser degree.
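The procedure described in this abstract can be sketched as follows: compute sample partial autocorrelations via the Durbin-Levinson recursion and take as the order estimate the largest lag whose partial autocorrelation exceeds a threshold of order sqrt(2 log log N / N), the scale suggested by the law of the iterated logarithm. This is a minimal illustration, not the paper's exact procedure; the constant `c` and the function names are assumptions.

```python
import numpy as np

def sample_pacf(x, max_lag):
    """Sample partial autocorrelations phi_kk, k = 1..max_lag,
    via the Durbin-Levinson recursion on sample autocovariances."""
    x = np.asarray(x, float)
    n = len(x)
    xc = x - x.mean()
    acov = np.array([xc[:n - k] @ xc[k:] / n for k in range(max_lag + 1)])
    acf = acov / acov[0]
    pacf = np.zeros(max_lag)
    phi = np.zeros(max_lag + 1)
    phi_prev = np.zeros(max_lag + 1)
    pacf[0] = acf[1]
    phi[1] = acf[1]
    for k in range(2, max_lag + 1):
        phi_prev[:] = phi
        num = acf[k] - sum(phi_prev[j] * acf[k - j] for j in range(1, k))
        den = 1.0 - sum(phi_prev[j] * acf[j] for j in range(1, k))
        phi[k] = num / den
        for j in range(1, k):
            phi[j] = phi_prev[j] - phi[k] * phi_prev[k - j]
        pacf[k - 1] = phi[k]
    return pacf

def estimate_order(x, max_lag=10, c=1.0):
    """Largest lag whose |pacf| exceeds the LIL-scale threshold
    c * sqrt(2 log log N / N); returns 0 if none does.
    (c is an illustrative choice, not the paper's constant.)"""
    n = len(x)
    thresh = c * np.sqrt(2.0 * np.log(np.log(n)) / n)
    pacf = sample_pacf(x, max_lag)
    above = np.nonzero(np.abs(pacf) > thresh)[0]
    return int(above[-1] + 1) if above.size else 0

# Simulate a stationary AR(2) process and estimate its order.
rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
order_hat = estimate_order(x)
print("estimated order:", order_hat)
```

For this AR(2) simulation the lag-2 sample partial autocorrelation is close to the true coefficient -0.3, well above the threshold, so the estimate is at least 2.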
“…In our framework, this means that s belongs to some model S_m with minimal dimension that we want to find: this is the consistency point of view. The following criteria have been designed to find it with probability tending to one when ε goes to zero (and the list of models remains fixed): BIC (Akaike, [5] or equivalently Schwarz, [35]) and Hannan and Quinn [20]. For a recent analysis of such criteria, see Guyon and Yao [19].…”
Section: Some Historical Remarks About Model Selection
This paper is mainly devoted to a precise analysis of what kind of penalties should be used in order to perform model selection via the minimization of a penalized least-squares type criterion within some general Gaussian framework including the classical ones. As compared to our previous paper on this topic (Birgé and Massart in J. Eur. Math. Soc. 3, 203-268 (2001)), more elaborate forms of the penalties are given which are shown to be, in some sense, optimal. We indeed provide more precise upper bounds for the risk of the penalized estimators and lower bounds for the penalty terms, showing that the use of smaller penalties may lead to disastrous results. These lower bounds may also be used to design a practical strategy that allows to estimate the penalty from the data when the amount of noise is unknown. We provide an illustration of the method for the problem of estimating a piecewise constant signal in Gaussian noise when neither the number, nor the location of the change points are known.
“…Our examples are misspecified regression models for univariate data Y_t, 1 ≤ t ≤ N, which are estimated, given column vector regressors x_t, … Hannan and Quinn (1979), W_N = 2 log log N. When two competing regressor processes x_t^(1) and x_t^(2) are being compared by (1.2), the one with the smaller criterion value is favored.…”
“…The asymptotic properties and consistency of BIC are well known and have been described extensively in the literature (Hannan and Quinn 1979; Haughton 1988). The optimum number of clusters minimizes the quantity…”

(Fig. 1: the discrete wavelet transform of a function, and of the function plus Gaussian white noise.)
Section: CEM Algorithm (Classification Expectation Maximization)
The number of studies using functional magnetic resonance imaging (fMRI) has grown very rapidly since the first description of the technique in the early 1990s. Most published studies have utilized data analysis methods based on voxel-wise application of general linear models (GLM). On the other hand, temporal clustering analysis (TCA) focuses on the identification of relationships between cortical areas by measuring temporal common properties. In its most general form, TCA is sensitive to the low signal-to-noise ratio of BOLD and is dependent on subjective choices of filtering parameters. In this paper, we introduce a method for wavelet-based clustering of time-series data and show that it may be useful in data sets with low signal-to-noise ratios, allowing the automatic selection of the optimum number of clusters. We also provide examples of the technique applied to simulated and real fMRI datasets.
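The idea of selecting the optimum number of clusters by minimizing BIC, as described in the abstract, might be sketched as follows. This is a plain one-dimensional Gaussian-mixture stand-in fitted by EM, not the paper's wavelet-domain CEM algorithm; all function names, the parameter count p = 3k - 1, and the simulated data are illustrative assumptions.

```python
import numpy as np

def gmm_em(x, k, iters=200, seed=0):
    """EM for a one-dimensional Gaussian mixture; returns the
    maximized log-likelihood (a simplified stand-in for the
    paper's wavelet-domain CEM fit)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: component densities and responsibilities.
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / np.maximum(dens.sum(axis=1, keepdims=True), 1e-300)
        # M-step: reestimate weights, means, variances (floored).
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
    dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
            / np.sqrt(2 * np.pi * var))
    return np.log(np.maximum(dens.sum(axis=1), 1e-300)).sum()

def bic(x, k):
    """BIC = -2 log L + p log n, with p = 3k - 1 free mixture parameters."""
    return -2.0 * gmm_em(x, k) + (3 * k - 1) * np.log(len(x))

# Two well-separated clusters; BIC should settle on a small k.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-5.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
scores = {k: bic(x, k) for k in range(1, 5)}
best = min(scores, key=scores.get)
print("selected number of clusters:", best)
```

The key trade-off is visible in the scores: moving from one to two components gains hundreds of log-likelihood nats while the penalty grows only by 3 log n, so BIC decisively rejects the one-cluster model.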