In order to find the convergence rate of finite-sample discrete entropies of a white Gaussian noise (WGN), the Brown entropy algorithm is tested numerically. As the sample size increases, the curves of these finite-sample discrete entropies approach their theoretical values asymptotically. The confidence intervals of the sample Brown entropy are narrower than those of the sample discrete entropy calculated from the differential entropy, an advantage that holds only for small sample sizes of WGN. The differences between the sample Brown entropies and their theoretical values are fitted closely by two rational functions, and the revised Brown entropies are more efficient. An application to the prediction of wind speed indicates that the variances of the resampled time series increase almost exponentially with the resampling period.

Entropy in information theory is a powerful concept for describing the indeterminacy of a non-stationary time series [1,2], and it has been used in a wide range of fields [3-12]. Its estimators from time series have also been studied extensively [13]. The differential entropy of a "true" WGN, which follows a normal distribution and has an infinite sample size, is determined exactly by the variance of that normal distribution [2]. In practice, however, WGNs have finite sample sizes, often contain outliers, and deviate from the normal distribution [14]. Exact and robust estimators of the entropy and the variance of observed data are therefore always needed [15]. In the prediction of a non-stationary time series containing a WGN, the WGN sets an inaccessible limit on the prediction accuracy, and the identification of the correct distribution function of the WGN from observed data is still an open problem [15]. In general, at a fixed confidence level, the confidence intervals of a statistic broaden as the sample size decreases: the smaller the sample size, the greater the differences between the estimated entropy or variance of a WGN and their true values [16].

Many studies show that the minimum error entropy (MEE) criterion can outperform the traditional mean square error criterion in supervised machine learning, especially in nonlinear and non-Gaussian situations [3]; here the mean square error is computed by the same formula as the variance. Although Chen and Principe [3] explained this phenomenon by the smoothing character of entropy under pollution by an undesired random variable, the reason why the entropy is more reliable than the mean square error, or the variance, is still unknown.

The content of this paper is organized as follows: for a pseudo-WGN with outliers and deviation from the normal distribution, the divergent intervals of its sample discrete entropies and sample variances are calculated numerically; the convergence rates of the sample discrete entropies toward their theoretical discrete entropies are fitted by two rational functions; the lost rate of signal in a wi...
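For reference, the relation presumably meant by "the sample discrete entropy calculated from the differential entropy" is the standard quantization relation: for a normal random variable X with variance \sigma^2 and a quantization step \Delta (in nats),

\[
h(X) = \tfrac{1}{2}\ln\!\left(2\pi e\,\sigma^{2}\right), \qquad
H_{\Delta}(X) \approx h(X) - \ln\Delta \quad (\Delta \to 0),
\]

so the theoretical discrete entropy used as the convergence target is fixed entirely by \sigma^2 and \Delta.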
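To make the convergence experiment concrete, the following minimal Python sketch compares a simple plug-in (histogram) discrete-entropy estimate of a finite-sample pseudo-WGN against the theoretical value above. The estimator, the bin width delta, and the sample sizes are illustrative assumptions only; this is not the Brown entropy algorithm tested in the paper.

```python
import numpy as np

def plugin_discrete_entropy(x, delta):
    """Maximum-likelihood (plug-in) discrete entropy in nats.

    A simple histogram estimator used only to illustrate the convergence
    experiment; it is NOT the Brown entropy algorithm studied in the paper.
    """
    edges = np.arange(x.min(), x.max() + delta, delta)  # bins of width delta
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()  # empirical bin probabilities
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(seed=0)
sigma, delta = 1.0, 0.1  # WGN standard deviation and quantization step (assumed)

# Theoretical discrete entropy of the quantized Gaussian for small delta:
# H_delta approx. 0.5 * ln(2*pi*e*sigma^2) - ln(delta)
h_theory = 0.5 * np.log(2.0 * np.pi * np.e * sigma**2) - np.log(delta)

for n in (100, 1_000, 10_000, 100_000):
    x = rng.normal(0.0, sigma, size=n)  # finite-sample pseudo-WGN
    h_hat = plugin_discrete_entropy(x, delta)
    print(f"n={n:>7}  H_hat={h_hat:.4f}  H_theory={h_theory:.4f}  "
          f"difference={h_hat - h_theory:+.4f}")
```

In this sketch the plug-in estimate approaches the theoretical value from below as the sample size n grows, which is the qualitative behaviour whose rate the rational-function fits of this paper quantify.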