2015
DOI: 10.14490/jjss.45.21

Conditions for Consistency of a Log-Likelihood-Based Information Criterion in Normal Multivariate Linear Regression Models under the Violation of the Normality Assumption

Abstract: In this paper, we clarify conditions for consistency of a log-likelihood-based information criterion in multivariate linear regression models with a normality assumption. Although normality is assumed for the distribution of the candidate model, we frame the situation so that the assumption of normality may be violated. The conditions for consistency are derived from two types of asymptotic theory; one is based on a large-sample asymptotic framework in which only the sample size approaches ∞, and the other is …
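As context for the abstract, the criteria studied belong to the family of log-likelihood-based information criteria for a normal multivariate linear regression candidate model: minus twice the maximized log-likelihood plus a penalty proportional to the number of free parameters. The sketch below is a minimal illustration of such a GIC-type criterion, not the paper's exact formulation; the function name info_criterion, the simulated data, and the penalty weight alpha (alpha = 2 gives an AIC-type criterion, alpha = log n a BIC-type criterion) are assumptions made for the example.

import numpy as np

def info_criterion(Y, X, alpha=2.0):
    # Log-likelihood-based criterion for the candidate model Y = X B + E,
    # with rows of E i.i.d. N_p(0, Sigma); alpha = 2 -> AIC, alpha = log(n) -> BIC.
    n, p = Y.shape
    k = X.shape[1]
    P = X @ np.linalg.solve(X.T @ X, X.T)      # projection onto the column space of X
    resid = Y - P @ Y
    Sigma_hat = resid.T @ resid / n            # MLE of the error covariance matrix
    _, logdet = np.linalg.slogdet(Sigma_hat)
    neg2loglik = n * (logdet + p * np.log(2 * np.pi) + p)
    n_params = p * k + p * (p + 1) / 2         # regression coefficients + covariance parameters
    return neg2loglik + alpha * n_params

# Toy usage: the smaller (true) subset of explanatory variables should be
# preferred under the BIC-type penalty alpha = log(n).
rng = np.random.default_rng(0)
n, p = 200, 3
X_full = np.column_stack([np.ones(n), rng.standard_normal((n, 4))])
B = np.vstack([rng.standard_normal((3, p)), np.zeros((2, p))])
Y = X_full @ B + rng.standard_normal((n, p))
for cols in ([0, 1, 2], [0, 1, 2, 3, 4]):
    print(cols, round(info_criterion(Y, X_full[:, cols], alpha=np.log(n)), 2))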

Cited by 9 publications (8 citation statements). References 33 publications. Citing publications span 2016–2023.

“…The reason why this inversion arises may be that a difference in risks between two over-specified models (i.e., models including the true model) diverges with n and p_n tending to infinity, and thus penalty terms of C_p and AIC are moderate but that of BIC is too strong. In addition to these studies, model selection criteria in high-dimensional data contexts and their consistency properties have been vigorously studied in various models and situations (e.g., Katayama and Imori, 2014; Imori and von Rosen, 2015; Yanagihara, 2015; Fujikoshi and Sakurai, 2016; Bai, Choi and Fujikoshi, 2018).…”
Section: Introduction (mentioning; confidence: 99%)
“…One simple way to construct such a variable selection method is to adjust the penalty term of the AIC. Indeed, Yanagihara (2019) proposed a consistent variable selection criterion based on this idea. However, this approach does not fit the concept of the AIC; namely, the adjusted penalty term is not an exact or approximately unbiased estimator of the bias which arises when the true density is approximated by a predictive density.…”
Section: Introduction (mentioning; confidence: 99%)
“…To deal with both the LS and HD asymptotic frameworks at the same time, we consider the following asymptotic framework: $n \to \infty,\ c_{n,p} = \frac{p}{n} \to c_0 \in [0, 1)$. This framework has been investigated in many studies, such as Yanagihara et al. (2017) and Yanagihara (2019). We write the limit under this framework as $\lim_{n \to \infty,\, c_{n,p} \to c_0}$, and we use the notation $p(n)$ for $p$ when we need to emphasize that it may vary with $n$. Following the existing literature, we assume that $p(n)$ is nondecreasing, and therefore $p(n)$ is either bounded or divergent.…”
Section: Introduction (mentioning; confidence: 99%)
“…0 to the moderate-high-dimensional asymptotic framework. Relaxing the normality assumption, Yanagihara (2015) dealt with conditions for consistency of the GIC under the moderate-high-dimensional asymptotic framework. Under the normality assumption, Yanagihara (2016) obtained conditions for consistency of the GC_p criterion under a hybrid-moderate-high-dimensional asymptotic framework such that n goes to ∞ and p may go to ∞ but p/n converges to some positive constant included in [0, 1).…”
Section: Introduction (mentioning; confidence: 99%)
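For reference, the two asymptotic regimes contrasted across the abstract and these excerpts can be restated compactly (a restatement, not a quotation):

\[
\text{large-sample (LS):}\quad p \text{ fixed},\ n \to \infty;
\qquad
\text{high-dimensional (HD):}\quad n \to \infty,\ c_{n,p} = \frac{p}{n} \to c_0 \in [0, 1).
\]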