The Gaussian Graphical Model (GGM) has recently grown popular in psychological research, with a large body of estimation methods being proposed and discussed across various fields of study, and several algorithms being identified and recommended as applicable to psychological datasets. Such high-dimensional model estimation, however, is not trivial, and algorithms tend to perform differently in different settings. In addition, psychological research poses unique challenges, including a strong focus on weak edges (e.g., bridge edges), the handling of data measured on ordered categorical scales, and relatively limited sample sizes. As a result, there is currently no consensus regarding which estimation procedure performs best in which setting. In this large-scale simulation study, we aimed to fill this gap in the literature by comparing the performance of several estimation algorithms suitable for Gaussian and skewed ordered categorical data across a multitude of settings, so as to arrive at concrete guidelines for applied researchers. In total, we investigated 60 different metrics across 564,000 simulated datasets. We summarized our findings through a platform that allows for manually exploring the simulation results. Overall, we found that a trade-off between discovery (e.g., sensitivity, edge weight correlation) and caution (e.g., specificity, precision) should always be expected, and that achieving both, which is a requirement for perfect replicability, is difficult. Further, we concluded that the estimation method is best chosen in light of the research question at hand, and we highlighted, alongside desirable asymptotic properties and discovery at low sample sizes, results pertaining to the most common research questions in the field.
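
For reference, the discovery and caution metrics named above are conventionally defined by comparing the estimated edge set of a network with the true edge set. A minimal sketch of these standard definitions, assuming $TP$, $FP$, $TN$, and $FN$ denote the numbers of true positive, false positive, true negative, and false negative edges, is:

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}, \qquad
\text{precision} = \frac{TP}{TP + FP},
\]

with edge weight correlation typically taken as the Pearson correlation between the true and estimated edge weights.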