2019
DOI: 10.1214/18-aos1783
Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors

Abstract: In this paper, we study high-dimensional sparse directed acyclic graph (DAG) models under the empirical sparse Cholesky prior. Among our results, strong model selection consistency, or graph selection consistency, is obtained under more general conditions than those in the existing literature. Compared to Cao, Khare and Ghosh (2017), the required conditions are weakened in terms of the dimensionality, sparsity and lower bound of the nonzero elements in the Cholesky factor. Furthermore, our result does not re…

Cited by 31 publications (54 citation statements)
References 49 publications
“…By the assumptions in the previous work, it follows that the DAGs in the analysis do not include models where the Cholesky factor has one or more non-zero elements in each column, since $p / \left(\tfrac{1}{8} d_n \log p\right)^{\frac{1+k}{2+k}} \to \infty$ as $n \to \infty$, while in our result each row can have at most $R_n \sim n/\log n$ non-zero entries, as indicated in Assumption 4. Hence, our strong model selection consistency result is more general than those of [Cao, Khare and Ghosh, 2019; Lee, Lee and Lin, 2018] in the sense that consistency holds for a larger class of DAGs.…”
Section: Strong Model Selection Consistency (mentioning)
confidence: 52%
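The comparison in the quoted statement turns on the fact that a row-sparsity budget of order $n/\log n$ grows without bound in $n$, unlike a fixed constant bound. A minimal numeric sketch of that growth (the sample sizes and the constant bound `s0` are illustrative assumptions, not quantities from the paper):

```python
import math

# Row-sparsity budget R_n ~ n / log n, as in the quoted Assumption 4:
# unlike a fixed constant s_0, it grows with the sample size n.
s0 = 10  # illustrative constant bound (assumption, not from the paper)
budgets = {n: n / math.log(n) for n in (100, 1_000, 10_000)}
for n, r in budgets.items():
    print(f"n={n:>6}: R_n ~ {r:.1f}  (constant bound s_0 = {s0})")
```

For large enough $n$ the budget $n/\log n$ exceeds any fixed $s_0$, which is the sense in which the consistency result covers a larger class of DAGs.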
“…Remark 2. It is worth pointing out that our assumptions on the true Cholesky factor are weaker than those of [Lee, Lee and Lin, 2018]. In particular, Lee, Lee and Lin [2018] introduce conditions A(2) and A(4) on the sparsity pattern of the true Cholesky factor, requiring the number of non-zero elements in each row as well as each column of $L_0^n$ to be smaller than some constant $s_0$, while in this paper we allow the maximum number of non-zero entries in any column of $L_0^n$ to grow at a rate smaller than $n/\log p_n$ (Assumption 2).…”
Section: Assumptions on the True Parameter Class (mentioning)
confidence: 99%
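To illustrate the kind of sparsity pattern being compared, the sketch below builds a sparse lower-triangular Cholesky factor with unit diagonal (as in the modified Cholesky decomposition $\Omega = L D L^\top$ of a DAG precision matrix) and counts the maximum number of off-diagonal non-zeros per row and per column. The dimensions, the two-parents-per-node structure, and the use of $n/\log p$ as a column bound are all illustrative assumptions, not quantities from the paper:

```python
import numpy as np

def max_row_col_nonzeros(L, tol=1e-12):
    """Max number of off-diagonal non-zero entries in any row
    and in any column of a (lower-triangular) factor L."""
    mask = (np.abs(L) > tol) & ~np.eye(L.shape[0], dtype=bool)
    return mask.sum(axis=1).max(), mask.sum(axis=0).max()

rng = np.random.default_rng(0)
p, n = 30, 200  # illustrative dimension and sample size (assumptions)

# Sparse lower-triangular Cholesky factor with unit diagonal:
# node j has at most two parents, chosen among nodes 0..j-1.
L = np.eye(p)
for j in range(1, p):
    parents = rng.choice(j, size=min(j, 2), replace=False)
    L[j, parents] = rng.uniform(0.5, 1.5, size=len(parents))

row_max, col_max = max_row_col_nonzeros(L)
# A column-sparsity bound of order n / log p, as discussed above;
# the missing constant is taken to be 1 here (an assumption).
bound = n / np.log(p)
print(row_max, col_max, bound)
```

In this toy example every row satisfies the constant bound by construction, while the column counts are random; the point of the quoted remark is that the column counts are allowed to grow like $n/\log p_n$ rather than being capped by a constant.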