In their recent paper, Forbes et al. (2019; FWMK) evaluate the replicability of network models in two studies. They identify considerable replicability issues, concluding that "current 'state-of-the-art' methods in the psychopathology network literature […] are not well-suited to analyzing the structure of the relationships between individual symptoms". Such strong claims require strong evidence, which the authors do not provide. FWMK identify low replicability by visually inspecting point estimates of networks, contrast this apparent low replicability with the results of two statistical tests that indicate higher replicability, and conclude that the tests are problematic. We make four points. First, statistical tests are superior to the visual inspection of point estimates because tests take sampling variability into account. Second, FWMK misinterpret the statistical tests in several important ways. Third, FWMK did not follow established recommendations when estimating networks in their first study, which led them to underestimate replicability. Fourth, FWMK draw conclusions about methodology from investigations of data; such conclusions do not follow from analyses of data and require investigations of the methodology itself. Overall, we show that the "poor replicability" observed by FWMK results from sampling variability and the use of suboptimal methods. We conclude by discussing important recent simulation work that guides researchers toward models appropriate for their data, such as nonregularized estimation routines.
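To make the closing reference to nonregularized estimation concrete, the sketch below (purely illustrative, not taken from FWMK or from our reanalysis; the function name and the simulated data are ours) estimates a Gaussian graphical model without regularization by computing partial correlations from the inverse of the sample covariance matrix, which is the basic idea behind nonregularized network estimation.

    import numpy as np

    def partial_correlation_network(data):
        # Nonregularized Gaussian graphical model: partial correlations
        # obtained from the inverse of the sample covariance matrix.
        cov = np.cov(data, rowvar=False)
        precision = np.linalg.inv(cov)
        d = np.sqrt(np.diag(precision))
        pcor = -precision / np.outer(d, d)   # standardize and flip sign
        np.fill_diagonal(pcor, 0.0)          # no self-loops in the network
        return pcor

    # Hypothetical "symptom" data with a weak common correlation
    rng = np.random.default_rng(0)
    true_cov = np.eye(5) + 0.3 * (np.ones((5, 5)) - np.eye(5))
    data = rng.multivariate_normal(mean=np.zeros(5), cov=true_cov, size=500)
    print(np.round(partial_correlation_network(data), 2))

In practice, researchers in this literature typically use dedicated R packages rather than hand-rolled code; the point of the sketch is only that no penalization step is involved, in contrast to regularized (e.g., lasso-based) routines.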