The Bayesian information criterion (BIC) is one of the most widely known and pervasively used tools in statistical model selection. Its popularity is derived from its computational simplicity and effective performance in many modeling frameworks, including Bayesian applications where prior distributions may be elusive. The criterion was derived by Schwarz (Ann Stat 1978, 6:461–464) to serve as an asymptotic approximation to a transformation of the Bayesian posterior probability of a candidate model. This article reviews the conceptual and theoretical foundations for BIC, and also discusses its properties and applications. WIREs Comput Stat 2012, 4:199–203. doi: 10.1002/wics.199
This article is categorized under:
Statistical and Graphical Methods of Data Analysis > Bayesian Methods and Theory
Statistical and Graphical Methods of Data Analysis > Information Theoretic Methods
Statistical Learning and Exploratory Methods of the Data Sciences > Modeling Methods
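As a concrete illustration of the criterion the abstract describes: Schwarz's BIC for a model with k free parameters fit to n observations is k·ln(n) − 2·ln(L̂), where L̂ is the maximized likelihood; the model with the smallest BIC is preferred. A minimal sketch (the function name and the toy log-likelihood values are illustrative, not taken from the article):

```python
import math

def bic(max_log_likelihood, n_params, n_obs):
    """Schwarz's BIC: n_params * ln(n_obs) - 2 * max_log_likelihood.

    Serves as an asymptotic approximation to a transformation of the
    model's Bayesian posterior probability; lower values are preferred.
    """
    return n_params * math.log(n_obs) - 2.0 * max_log_likelihood

# Toy comparison: a 2-parameter model vs. a 5-parameter model fit to
# the same 100 observations (the log-likelihoods are made up).
print(bic(-150.0, 2, 100))  # simpler model
print(bic(-147.0, 5, 100))  # richer model pays a larger penalty
```

Note that the criterion depends on the data only through n and the maximized log-likelihood, which is what makes it computationally simple to apply once each candidate model has been fit.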
The Akaike information criterion (AIC) is one of the most ubiquitous tools in statistical modeling. The first model selection criterion to gain widespread acceptance, AIC was introduced in 1973 by Hirotugu Akaike as an extension to the maximum likelihood principle. Maximum likelihood is conventionally applied to estimate the parameters of a model once the structure and dimension of the model have been formulated. Akaike's seminal idea was to combine into a single procedure the process of estimation with structural and dimensional determination. This article reviews the conceptual and theoretical foundations for AIC, discusses its properties and its predictive interpretation, and provides a synopsis of important practical issues pertinent to its application. Comparisons and delineations are drawn between AIC and its primary competitor, the Bayesian information criterion (BIC). In addition, the article covers refinements of AIC for settings where the asymptotic conditions and model specification assumptions that underlie the justification of AIC may be violated.
This article is categorized under:
Software for Computational Statistics > Artificial Intelligence and Expert Systems
Statistical Models > Model Selection
Statistical and Graphical Methods of Data Analysis > Modeling Methods and Algorithms
Statistical and Graphical Methods of Data Analysis > Information Theoretic Methods
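The comparison the abstract draws between AIC and BIC can be made concrete: AIC = 2k − 2·ln(L̂) charges a constant penalty of 2 per parameter, whereas BIC's per-parameter penalty ln(n) grows with the sample size, so for n larger than e² ≈ 7.4 BIC penalizes model dimension more heavily and tends to select more parsimonious models. A minimal sketch (function names are illustrative):

```python
import math

def aic(max_log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln(L-hat); lower is better."""
    return 2.0 * n_params - 2.0 * max_log_likelihood

def bic(max_log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k ln(n) - 2 ln(L-hat)."""
    return n_params * math.log(n_obs) - 2.0 * max_log_likelihood

# With n = 100 observations, ln(100) is about 4.6 > 2, so BIC exceeds
# AIC for the same fit and number of parameters: its stiffer penalty
# is what drives its preference for smaller models.
print(aic(-150.0, 2))
print(bic(-150.0, 2, 100))
```

Both criteria are applied the same way in practice: compute the value for each candidate model and select the model with the smallest value.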