Studying complex relations in multivariate datasets is a common task in psychological science. Recently, the Gaussian graphical model has emerged as an increasingly popular model for characterizing the conditional dependence structure of random variables. Although the graphical lasso ($\ell_1$-regularization) is the most well-known estimator across the sciences, it has several drawbacks that make it less than ideal for model selection. There are now alternative forms of regularization that were developed specifically to overcome issues inherent to the $\ell_1$-penalty. To date, this information has not been synthesized. This paper provides a comprehensive survey of nonconvex regularization, spanning from the smoothly clipped absolute deviation (SCAD) penalty to continuous approximations of the $\ell_0$-penalty (i.e., best subset selection), for directly estimating the inverse covariance matrix. A common thread shared by these penalties is that they all enjoy the oracle properties; that is, they perform as though the \emph{true} generating model were known in advance. To assess whether these theoretical properties hold in practice, I conducted extensive numerical experiments, which indicated performance superior to glasso and more than competitive with non-regularized model selection, all while remaining computationally feasible for many variables. In addition, the important topics of tuning parameter selection and statistical inference in regularized models are reviewed. The penalties are then employed to estimate the dependence structure of post-traumatic stress disorder symptoms. The discussion includes several ideas for future research, including a plethora of information to facilitate their study. I have implemented the methods in the
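To make the nonconvex idea concrete, here is a minimal R sketch of the SCAD penalty (one of the surveyed penalties); the tuning constant a = 3.7 is the conventional default, and the plot is purely illustrative rather than part of the paper's implementation.

```r
# A minimal sketch of the SCAD penalty function; a = 3.7 is the usual default.
scad_penalty <- function(theta, lambda, a = 3.7) {
  abs_t <- abs(theta)
  ifelse(abs_t <= lambda,
         lambda * abs_t,                     # acts like the lasso near zero
         ifelse(abs_t <= a * lambda,
                (2 * a * lambda * abs_t - abs_t^2 - lambda^2) / (2 * (a - 1)),
                lambda^2 * (a + 1) / 2))     # constant for large |theta|
}

# Unlike the l1-penalty, SCAD levels off for large coefficients, which is
# what removes the bias that makes the lasso less than ideal for selection.
curve(scad_penalty(x, lambda = 0.5), from = -3, to = 3, ylab = "penalty")
curve(0.5 * abs(x), add = TRUE, lty = 2)     # lasso penalty for comparison
```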
Gaussian graphical models (GGM) allow for learning conditional independence structures that are encoded by partial correlations. Whereas there are several \proglang{R} packages for classical (i.e., frequentist) methods, there are only two that implement a Bayesian approach, and both are exclusively focused on identifying the graphical structure, that is, detecting non-zero effects. The \proglang{R} package \pkg{BGGM} not only fills this gap, but it also includes novel Bayesian methodology for extending inference beyond identifying non-zero relations. \pkg{BGGM} is built around two Bayesian approaches to inference: estimation and hypothesis testing. The former focuses on the posterior distribution and includes extensions to assess predictability, as well as methodology to compare partial correlations. The latter includes methods for Bayesian hypothesis testing, in both exploratory and confirmatory contexts, with the novel matrix-$F$ prior distribution. This allows for testing order- and equality-constrained hypotheses, as well as a combination of both, with the Bayes factor. Further, there are two approaches for comparing any number of GGMs, using either the posterior predictive distribution or Bayesian hypothesis testing. This work describes the software implementation of these methods. We end by discussing future directions for \pkg{BGGM}.
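As a brief orientation to the workflow the abstract describes, here is a hedged sketch of the estimation-based approach; \code{estimate()}, \code{select()}, and the bundled \code{ptsd} dataset follow my reading of the \pkg{BGGM} documentation, so argument names should be checked against the package itself.

```r
library(BGGM)

# Sketch of the estimation-based workflow, assuming the documented API:
# estimate() samples the posterior of the partial correlations, and
# select() retains edges whose credible interval excludes zero.
fit <- estimate(ptsd, type = "continuous")  # `ptsd` ships with BGGM
sel <- select(fit, cred = 0.95)             # 95% credible intervals
plot(sel)                                   # visualize the selected graph
```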
The topic of replicability has recently captivated the emerging field of network psychometrics. Although methodological practice (e.g., p-hacking) has been identified as a root cause of unreliable research findings in psychological science, the statistical model itself has come under attack in the partial correlation network literature. In a motivating example, I first describe how sampling variability inherent to partial correlations can merely give the appearance of unreliability. For example, when going from zero-order to partial correlations there is necessarily more sampling variability, which translates into reduced statistical power. I then introduce novel methodology for deriving expected network replicability. This analytic solution can be used with Pearson, Spearman, and polychoric partial correlations. I employ the method to highlight an additional source of sampling variability, that is, when going from continuous to ordinal data with few categories: in networks with 20 variables (N = 500), replicability can exceed 50% for continuous data but decreases to less than 25% for ordinal data! Additionally, I propose using the smallest edge size of interest to achieve a desired level of replicability in network models. I end with recommendations, including that network psychometrics align itself with gold-standard approaches for assessing replication (e.g., by using methods with defined error rates). I have implemented the method for computing expected network replicability in the R package GGMnonreg.
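The following is a minimal R sketch of the core calculation behind expected network replicability for the Pearson case, not the GGMnonreg implementation: each partial correlation in a p-node network controls for p − 2 variables, so its Fisher-z standard error is 1/sqrt(n − (p − 2) − 3), and replicating an edge across two independent studies requires detecting it in both (power squared).

```r
# Sketch of expected network replicability across two studies, assuming
# Pearson partial correlations and a simple normal approximation.
enr_sketch <- function(pcors, n, p, alpha = 0.05) {
  z     <- atanh(pcors)               # Fisher z-transform of the true edges
  se    <- 1 / sqrt(n - (p - 2) - 3)  # controls for the other p - 2 nodes
  crit  <- qnorm(1 - alpha / 2)
  power <- pnorm(abs(z) / se - crit)  # power to detect each edge once
  mean(power^2)                       # expected proportion detected twice
}

# e.g., a 20-node network (N = 500) with 30 modest edges of 0.15:
enr_sketch(pcors = rep(0.15, 30), n = 500, p = 20)
```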
Partial correlation networks have emerged as an increasingly popular model for studying mental disorders. Although conditional independence is a fundamental concept in network analysis, which corresponds to the null hypothesis, the focus is typically to detect and then visualize non-zero partial correlations (i.e., the “edges” connecting nodes) in a graph. As a result, it may be tempting to interpret a missing edge as providing evidence for its absence, analogously to misinterpreting a non-significant p-value. In this work, we first establish that a missing edge is commonly misinterpreted as providing evidence for conditional independence, with examples spanning from substantive applications to tutorials that instruct researchers to misinterpret their networks. We then go beyond misguided “inferences” and establish that null associations are interesting in their own right. In the following section, three illustrative examples are provided that employ Bayesian hypothesis testing to formally evaluate the null hypothesis, including a reanalysis of two psychopathology networks, confirmatory testing to determine whether a particular post-traumatic stress disorder symptom is disconnected from the network, and attenuation due to correcting for covariates. Our results shed light upon conditionally independent symptoms and demonstrate that a missing edge does not necessarily correspond to evidence for the null hypothesis. These findings are accompanied by a simulation study that provides insights into the sample size needed to accurately detect null relations. We conclude with implications for both clinical and theoretical inquiries.
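For readers who want to evaluate null edges directly, here is a hedged sketch using the exploratory Bayes factor workflow from BGGM (the package described above); the function and element names reflect my reading of the package documentation and should be verified against it.

```r
library(BGGM)

# Sketch, assuming BGGM's exploratory hypothesis-testing workflow:
# explore() computes a Bayes factor for each partial correlation, and
# select() separates edges with evidence for a relation from edges with
# evidence for conditional independence (rather than mere absence).
fit <- explore(ptsd, type = "continuous")
sel <- select(fit, BF_cut = 3)
sel$Adj_10  # assumed element: edges with BF_10 > 3 (evidence for an edge)
sel$Adj_01  # assumed element: edges with BF_01 > 3 (evidence for the null)
```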
We shed much needed light upon a critical assumption that is often overlooked, or not considered at all, in random-effects meta-analysis: namely, that the between-study variance is constant across \emph{all} studies, which implies that they are from the \emph{same} population. Yet it is not hard to imagine a situation where there are several populations of studies, not merely one, perhaps differing in their between-study variance (i.e., heteroskedasticity). The objective is then to make inference given that heterogeneity itself varies. There is an immediate problem, however, in that modeling heterogeneous variance components is not straightforward to do in a general way. To this end, we propose novel methodology, termed Bayesian location-scale meta-analysis, that can accommodate moderators for both the overall effect (location) and the between-study variance (scale). After introducing the model, we extend heterogeneity statistics, prediction intervals, and hierarchical shrinkage, all of which customarily assume constant heterogeneity, to accommodate variations therein. With these new tools in hand, we demonstrate that quite literally \emph{everything} changes when between-study variance is not constant across studies. The changes were not small and easily passed the interocular trauma test: the importance hits right between the eyes. Examples include (but are not limited to) inference on the overall effect, a compromised predictive distribution, and improper shrinkage of the study-specific effects. Further, we provide an illustrative example where heterogeneity was not considered a mere nuisance, showing that modeling variance for its own sake can provide unique inferences, in this case into discrimination across nine countries. The discussion includes several ideas for future research. We have implemented the proposed methodology in the {\tt R} package \textbf{blsmeta}.
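To illustrate the model structure (not the \textbf{blsmeta} implementation, which is Bayesian), here is a minimal maximum-likelihood sketch in R in which the between-study variance is log-linear in scale moderators, so heterogeneity is allowed to differ across studies.

```r
# Sketch of a location-scale meta-analysis: yi ~ N(x'beta, vi + tau2_i)
# with log(tau2_i) = z'gamma, so the scale moderators in z let the
# between-study variance differ across studies.
lsm_negloglik <- function(par, yi, vi, x, z) {
  beta  <- par[seq_len(ncol(x))]
  gamma <- par[-seq_len(ncol(x))]
  mu    <- drop(x %*% beta)        # location: overall effect (+ moderators)
  tau2  <- exp(drop(z %*% gamma))  # scale: study-specific heterogeneity
  -sum(dnorm(yi, mean = mu, sd = sqrt(vi + tau2), log = TRUE))
}

# e.g., constant location, heterogeneity differing by a binary moderator m:
# x <- cbind(1); z <- cbind(1, m)
# optim(rep(0, ncol(x) + ncol(z)), lsm_negloglik,
#       yi = yi, vi = vi, x = x, z = z)
```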