2019
DOI: 10.1016/j.csda.2019.05.007

Regularized joint estimation of related vector autoregressive models

Cited by 16 publications (9 citation statements)
References 34 publications
Year Published: 2020–2024

“…Thus, ignoring the information of other groups may lead to suboptimal solutions (Danaher et al., 2014; Lee and Liu, 2015). Moreover, joint estimation of graphical models has been applied successfully in a number of problems, including metabolite experiments (Tan et al., 2017), cancer networks (Mohan et al., 2012; Peterson et al., 2015; Lee and Liu, 2015; Saegusa and Shojaie, 2016; Hao et al., 2018), biomedical data (Yajima et al., 2014; Kling et al., 2015; Pierson et al., 2015), gene expression (Chun et al., 2015), text processing (Guo et al., 2011), climate data (Ma and Michailidis, 2016), and fMRI (Qiu et al., 2016; Colclough et al., 2018; Skripnikov and Michailidis, 2019; Lukemire et al., 2020). In all of these problems, data are heterogeneous, but the graphs share similarities.…”
Section: Joint Gaussian Graphical Models
Citation type: mentioning (confidence: 99%)
“…A variety of applications have illustrated the value of graphical models for analyzing scientific phenomena (Felsenstein, 1981; Schäfer and Strimmer, 2005; Friedman et al., 2000; Chan et al., 2017; Dondelinger et al., 2013). Specifically, graphical models have proven useful for elucidating the mechanisms of brain function (Foti and Fox, 2019; Manning et al., 2018; Schwab et al., 2018; Greenewald et al., 2017; Colclough et al., 2018; Qiu et al., 2016; Skripnikov and Michailidis, 2019). This manuscript outlines joint graphical models, an extension of standard graphical models that is useful for jointly analyzing data from multiple sources, e.g., neurological data measured at multiple timescales, or joint neurological, genetic and phenotypic data.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…In the so-called high-dimensional setting, where the model dimension becomes comparable to or even exceeds the sample size, regularization schemes are employed to guard against over-fitting. These schemes include Tikhonov regularization [56], [57], ℓ1-regularization or the LASSO [30]–[33], the smoothly clipped absolute deviation [58], [59], the Elastic-Net [60], and their variants, and have proven particularly useful in MVAR estimation [21]–[24], [28], [29]. Among these techniques, the LASSO has been widely used and studied in the high-dimensional sparse MVAR setting, under fairly general assumptions [22]–[24].…”
Section: B. Lasso-based Causal Inference in the High-dimensional Setting
Citation type: mentioning (confidence: 99%)
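
The statement above describes the generic per-equation ℓ1 (LASSO) recipe for sparse MVAR/VAR estimation. The following is a minimal sketch of that generic approach, not the implementation of the cited paper or of any referenced work; the function name fit_sparse_var and the penalty level lam are illustrative choices.

```python
# Minimal sketch: per-equation lasso estimation of a sparse VAR(1) model.
# Illustrative only; fit_sparse_var and lam are assumed names, not from any cited paper.
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_var(X, lam=0.05):
    """Estimate a sparse VAR(1) transition matrix A from a (T x p) series X.

    Row j of A comes from an l1-penalized regression of series j at time t on
    all series at time t-1, so zero entries encode absent Granger-causal links.
    """
    Y, Z = X[1:], X[:-1]                      # responses and lagged predictors
    p = X.shape[1]
    A = np.zeros((p, p))
    for j in range(p):                        # one lasso problem per target series
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000)
        model.fit(Z, Y[:, j])
        A[j] = model.coef_
    return A

# Small simulated check: recover a sparse 5 x 5 transition matrix.
rng = np.random.default_rng(0)
A_true = np.diag(np.full(5, 0.5))
A_true[0, 1] = 0.3
X = np.zeros((200, 5))
for t in range(1, 200):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(5)
print(np.round(fit_sparse_var(X), 2))
```

The per-equation formulation is part of what makes the LASSO attractive here: each row of the transition matrix is a separate sparse regression, so estimation parallelizes trivially and the penalty level can be tuned per equation, e.g. by cross-validation.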
“…• Skrip19b: a two-stage approach [SM19b] that first estimated the parameters of the common network using a group lasso (like our CGN) and then estimated the individual components based on the resulting common network. This approach does not guarantee a globally optimal solution, as the parameters were estimated in sequential steps rather than optimized jointly.…”
Section: Common and Differential GC
Citation type: mentioning (confidence: 99%)
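
To make the two-stage idea in the statement above concrete, here is a rough sketch under simplifying assumptions: K related VAR(1) models, a group-lasso stage that ties each transition-matrix entry across the K entities (solved by plain proximal gradient), and a second lasso stage that fits entity-specific deviations from the common fit. All names (fit_common_group_lasso, fit_individual, lam_c, lam_i) are illustrative; this is not the estimator of [SM19b] or the CGN method.

```python
# Rough two-stage sketch: (1) group-lasso fit that ties each (i, j) coefficient
# across K related VAR(1) models, (2) per-entity lasso on the residuals.
# Illustrative only; not the algorithm of any cited paper.
import numpy as np
from sklearn.linear_model import Lasso

def fit_common_group_lasso(Ys, Zs, lam_c=0.1, n_iter=500):
    """Stage 1: proximal-gradient group lasso over K stacked regression problems.

    Ys[k], Zs[k] are the (n_k x p) response and lagged-predictor matrices of
    entity k; the (i, j) transition coefficients of all entities form one group.
    """
    K, p = len(Ys), Ys[0].shape[1]
    A = np.zeros((K, p, p))
    # conservative step size from the largest per-entity Lipschitz constant
    step = 1.0 / max(np.linalg.norm(Z, 2) ** 2 / Z.shape[0] for Z in Zs)
    for _ in range(n_iter):
        G = np.stack([-(Ys[k] - Zs[k] @ A[k].T).T @ Zs[k] / Zs[k].shape[0]
                      for k in range(K)])            # gradient of the squared loss
        B = A - step * G
        norms = np.linalg.norm(B, axis=0)            # p x p group norms across entities
        shrink = np.maximum(0.0, 1.0 - step * lam_c / np.maximum(norms, 1e-12))
        A = B * shrink[None, :, :]                   # group soft-thresholding
    return A

def fit_individual(Ys, Zs, A_common, lam_i=0.05):
    """Stage 2: lasso fit of each entity's deviation from the common network."""
    devs = []
    for Y, Z, A_c in zip(Ys, Zs, A_common):
        R = Y - Z @ A_c.T                            # residuals after the common part
        model = Lasso(alpha=lam_i, fit_intercept=False, max_iter=10_000)
        model.fit(Z, R)                              # multi-output lasso, coef_ is p x p
        devs.append(model.coef_)
    return devs
```

Because the two stages are solved one after the other, the combined estimate is not a stationary point of a single joint objective, which is exactly the limitation the quoted comparison points out.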