Classical statistics relies largely on parametric models. Typically, assumptions are made about the structural and stochastic parts of the model, and optimal procedures are derived under these assumptions. Standard examples are least squares estimators in linear models and their extensions, maximum likelihood estimators and the corresponding likelihood-based tests, and generalized method of moments (GMM) techniques in econometrics. Robust statistics deals with deviations from the stochastic assumptions and the dangers they pose for classical estimators and tests, and it develops statistical procedures that remain reliable and reasonably efficient in the presence of such deviations. It can be viewed as a statistical theory of approximate parametric models, offering a reasonable compromise between the rigidity of a strict parametric approach and the potential difficulties of interpreting a fully nonparametric analysis.

Many classical procedures are well known for not being robust: they are optimal when the assumed model holds exactly, but they can be biased and/or inefficient when even small deviations from the model are present. Results obtained by applying standard classical procedures to real data can therefore be misleading.

In this paper we give a brief introduction to robust statistics by reviewing some basic general concepts and tools and by showing how they can be used in data analysis to provide a complementary analysis with additional useful information. We focus on robust procedures based on M-estimators and the associated tests, because they provide a unified statistical framework that complements the classical theory. Robust procedures are discussed for standard models, including linear models, generalized linear models, and multivariate analysis. Some recent developments in high-dimensional statistics are also outlined.
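To fix ideas, we recall the standard definition of an M-estimator (the notation below is chosen for illustration only). Given observations $x_1,\dots,x_n$ and a loss function $\rho$, an M-estimator $\hat{\theta}_n$ of a parameter $\theta$ is defined by

\[
\hat{\theta}_n = \operatorname*{arg\,min}_{\theta} \sum_{i=1}^{n} \rho(x_i;\theta),
\qquad \text{or, when } \psi = \partial\rho/\partial\theta \text{ exists,} \qquad
\sum_{i=1}^{n} \psi\bigl(x_i;\hat{\theta}_n\bigr) = 0 .
\]

The choice $\rho(x;\theta) = -\log f(x;\theta)$ recovers the maximum likelihood estimator, while a bounded $\psi$-function, such as Huber's

\[
\psi_c(r) = \min\bigl(c,\,\max(-c,\,r)\bigr), \qquad c > 0,
\]

caps the influence of any single observation on the estimate. This bounded-influence property is the sense in which M-estimators trade a small loss of efficiency at the exact model for reliability under small deviations from it.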