Recent advances in genetics, computer vision, and text mining involve analyzing data drawn from a large domain, where the domain size is comparable to or larger than the number of samples. In this dissertation, we apply polynomial methods to several statistical problems with a rich history and wide applications. The goal is to understand the fundamental limits of these problems in the large-domain regime, and to design sample-optimal and time-efficient algorithms with provable guarantees.

The first part investigates the problem of property estimation. Consider the problem of estimating the Shannon entropy of a distribution over $k$ elements from $n$ independent samples. We obtain the minimax mean-square error within universal multiplicative constant factors provided that $n$ exceeds a constant factor of $k/\log k$; otherwise no consistent estimator exists. This refines the recent result on the minimal sample size for consistent entropy estimation. The apparatus of best polynomial approximation plays a key role both in the construction of optimal estimators and, via a duality argument, in the minimax lower bound.

We also consider the problem of estimating the support size of a discrete distribution whose minimum non-zero mass is at least $1/k$. Under the independent sampling model, we show that the sample complexity, i.e., the minimal sample size to achieve an additive error of $\varepsilon k$ with probability at least 0.1, is within universal constant factors of $\frac{k}{\log k}\log^2\frac{1}{\varepsilon}$, which improves the state-of-the-art result of $\frac{k}{\varepsilon^2 \log k}$. A similar characterization of the minimax risk is also obtained. Our procedure is a linear estimator based on the Chebyshev polynomial and its approximation-theoretic properties, which can be evaluated in $O(n + \log^2 k)$ time and attains the sample complexity within constant factors. The superiority of the proposed estimator in terms of accuracy, computational efficiency, and scalability is demonstrated on a variety of synthetic and real datasets.
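To make the entropy result concrete, the following display records the minimax rate in this setting. It is stated here as context, assuming the standard form of this rate from the associated published work; universal constants are suppressed, and only the constant-factor characterization is asserted in the abstract above.

% Minimax mean-square error for entropy estimation over M_k, the set of
% distributions on k elements; the characterization holds in the regime
% n exceeding a constant factor of k/log k named above.
\[
  \inf_{\hat H}\, \sup_{P \in \mathcal{M}_k} \mathbb{E}\big(\hat H - H(P)\big)^2
  \;\asymp\; \Big(\frac{k}{n \log n}\Big)^2 + \frac{\log^2 k}{n},
  \qquad n \gtrsim \frac{k}{\log k}.
\]

Roughly, the first term reflects the bias reduction obtained from best polynomial approximation of $x \mapsto x \log \frac{1}{x}$ near zero, and the second is the variance term already attained by the plug-in estimator.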
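The Chebyshev-based procedure also admits a compact implementation. Below is a minimal Python sketch following the general shape described above, not the dissertation's exact construction: the estimator is linear in the fingerprint (the number of symbols seen exactly $j$ times), with weights derived from a degree-$L$ Chebyshev polynomial shifted to an interval of Poisson rates. The function name support_size_estimate, the tuning constants c0 and c1, and the specific rate interval are illustrative assumptions.

import math
from collections import Counter

import numpy as np
from numpy.polynomial import Chebyshev, Polynomial


def support_size_estimate(samples, k, c0=0.5, c1=0.5):
    """Sketch of a Chebyshev-polynomial linear estimator of support size.

    `samples` is an iterable of observed symbols; every nonzero mass of the
    underlying distribution is assumed to be at least 1/k.  The constants
    c0, c1 and the rate interval are illustrative, not values from the source.
    """
    counts = Counter(samples)                 # N_i for each observed symbol
    fingerprint = Counter(counts.values())    # phi_j = #{symbols seen exactly j times}
    n = sum(counts.values())

    L = max(1, int(c0 * math.log(k)))         # polynomial degree on the order of log k
    lo = n / k                                # smallest possible Poisson rate n * p_i
    hi = max(2.0 * lo, c1 * math.log(k))      # beyond ~log k, symbols are seen often

    # P(x) = 1 - T_L(mapped x) / T_L(mapped 0) satisfies P(0) = 0, while P stays
    # exponentially close to 1 on [lo, hi]: |T_L| <= 1 there, but T_L at the
    # mapped origin (outside [-1, 1]) is exponentially large in L.
    cheb = Chebyshev.basis(L, domain=[lo, hi])
    poly = Polynomial([1.0]) - cheb.convert(kind=Polynomial) / cheb(0.0)
    a = np.zeros(L + 1)
    a[: len(poly.coef)] = poly.coef           # monomial coefficients of P

    # Under Poisson sampling, E[(N)_j] = lambda^j, where (m)_j = m!/(m-j)! is
    # the falling factorial, so g(m) estimates P(lambda) without bias.
    def g(m):
        return sum(a[j] * math.perm(m, j) for j in range(1, min(m, L) + 1))

    # Linear in the fingerprint: small counts are weighted by g; a symbol seen
    # more than L times is certainly in the support and contributes exactly 1.
    return sum((g(j) if j <= L else 1.0) * phi for j, phi in fingerprint.items())

Only counts up to $L$ enter the polynomial weights, so beyond the single pass that builds the fingerprint, the weights cost $O(L^2) = O(\log^2 k)$ arithmetic operations; this is consistent with the $O(n + \log^2 k)$ evaluation time claimed above.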