The ensemble technique has been widely used in numerical weather prediction and extended-range forecasting. Current approaches to evaluating predictability with the ensemble technique fall into two major groups. One is dynamical, including generating Lyapunov vectors, bred vectors, and singular vectors to sample the fastest error-growing directions of the phase space, and examining how prediction efficiency depends on ensemble size. The other is statistical, including distributional analysis and quantifying prediction utility by the Shannon entropy and the relative entropy. With simple models, one can currently run as many ensembles as desired, each containing a large number of members. As forecast models become increasingly complicated, however, one can afford only a small number of ensembles, each with a limited number of members, thereby sacrificing the accuracy with which forecast errors are estimated.

To uncover connections between different information-theoretic approaches, and between dynamical and statistical approaches, we propose a general theoretical framework, based on the (ε, τ)-entropy and the scale-dependent Lyapunov exponent, to quantify information loss in ensemble forecasting. More importantly, to greatly expedite computations, reduce data storage, and improve forecasting accuracy, we propose a technique for constructing a large number of "pseudo" ensembles from a single solution or scalar dataset. This pseudo-ensemble technique appears to be applicable under rather general conditions, one important situation being that observational data are available but the exact dynamical model is unknown.
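To make the idea of pseudo-ensembles concrete, the sketch below illustrates one plausible construction, assuming the pseudo-ensembles are formed by time-delay embedding of the scalar record and by grouping reconstructed states that begin close to a chosen reference state. The embedding dimension m, delay L, neighborhood size eps, prediction horizon, and the logistic-map test series are all illustrative assumptions, not prescriptions from this work.

```python
import numpy as np

def delay_embed(x, m, L):
    """Build delay vectors V_i = [x_i, x_{i+L}, ..., x_{i+(m-1)L}]."""
    n = len(x) - (m - 1) * L
    return np.column_stack([x[j * L : j * L + n] for j in range(m)])

def pseudo_ensemble(x, m=4, L=1, eps=0.05, ref=0, horizon=30):
    """Collect embedded states that start within eps of the reference state
    and track how each member trajectory separates from the reference."""
    V = delay_embed(np.asarray(x, dtype=float), m, L)
    n_usable = len(V) - horizon
    d0 = np.linalg.norm(V[:n_usable] - V[ref], axis=1)
    members = np.where((d0 > 0) & (d0 < eps))[0]   # exclude the reference itself
    spread = np.array([[np.linalg.norm(V[i + t] - V[ref + t]) for t in range(horizon)]
                       for i in members])
    return members, spread

# A chaotic logistic-map series stands in for the single scalar dataset.
x = np.empty(5000)
x[0] = 0.4
for k in range(1, len(x)):
    x[k] = 3.9 * x[k - 1] * (1.0 - x[k - 1])

members, spread = pseudo_ensemble(x, m=4, L=1, eps=0.05, ref=100, horizon=30)
print(f"{len(members)} pseudo-ensemble members found near the reference state")
if len(members):
    print(f"mean member-reference separation after 30 steps: {spread[:, -1].mean():.3f}")
```

Tracking how such small initial separations grow is also the basis of common estimators of the scale-dependent Lyapunov exponent, which measures the logarithmic growth rate of the separation as a function of the scale the separation has reached.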