While many statistical models and methods are now available for network analysis, resampling network data remains a challenging problem. Cross-validation is a useful general tool for model selection and parameter tuning, but it is not directly applicable to networks, since splitting network nodes into groups requires deleting edges and destroys some of the network structure. Here we propose a new network resampling strategy, based on splitting node pairs rather than nodes, applicable to cross-validation for a wide range of network model selection tasks. We provide a theoretical justification for our method in a general setting and examples of how the method can be used in specific network model selection and parameter tuning tasks. Numerical results on simulated networks and on a citation network of statisticians show that this cross-validation approach works well for model selection.

Statistical methods for analyzing networks have received a lot of attention because of their wide-ranging applications in areas such as sociology, physics, biology, and the medical sciences. Statistical network models provide a principled approach to extracting salient information about the network structure while filtering out the noise. Perhaps the simplest statistical network model is the famous Erdős-Rényi model [Erdős and Rényi, 1960], which served as a building block for a large body of more complex models, including the stochastic block model (SBM) [Holland et al., 1983], the degree-corrected stochastic block model (DCSBM) [Karrer and Newman, 2011], the mixed membership block model [Airoldi et al., 2008], and the latent space model [Hoff et al., 2002], to name a few.

While there has been plenty of work on models for networks and algorithms for fitting them, inference for these models is often lacking, making it hard to take advantage of the full power of statistical modeling. Data splitting methods provide a general, simple, and relatively model-free inference framework and are commonly used in modern statistics, with cross-validation (CV) being the tool of choice for many model selection and parameter tuning tasks. For networks, both tasks are important: while there are plenty of models to choose from, it is much less clear how to select the best model for the data, and how to choose tuning parameters for the selected model, which is often necessary in order to fit it. In classical settings where the data points are assumed to be an i.i.d. sample, cross-validation works by splitting the data into multiple parts (folds), holding out one fold at a time as a test set, fitting the model on the remaining folds and computing its error on the held-out fold, and finally averaging the errors across all folds to obtain the cross-validation error. The model or the tuning parameter is then chosen to minimize this error. To explain the challenge of applying this idea to networks, we first introduce a probabilistic framework.

Recall that n is the number of nodes and A is the n × n adjacency matrix. Let D = diag(d_1, d_2, ..., d_n) be the diagonal m...
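To make the classical cross-validation procedure described above concrete, the following is a minimal sketch of K-fold cross-validation for i.i.d. data. The `fit` and `loss` callables, the fold count, and the random shuffling are illustrative assumptions for this sketch, not part of the method proposed in this paper; the point is that the split operates on whole observations.

```python
import numpy as np

def k_fold_cv_error(X, y, fit, loss, n_folds=5, seed=0):
    """Sketch of classical K-fold cross-validation for i.i.d. data.

    fit(X_train, y_train) -> fitted model exposing .predict(X) (assumed interface)
    loss(y_true, y_pred)  -> scalar error on the held-out fold (assumed interface)
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    # Shuffle the observation indices and split them into roughly equal folds.
    folds = np.array_split(rng.permutation(n), n_folds)

    errors = []
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Fit on the remaining folds, evaluate on the held-out fold.
        model = fit(X[train_idx], y[train_idx])
        errors.append(loss(y[test_idx], model.predict(X[test_idx])))
    # Average the held-out errors to obtain the cross-validation error.
    return np.mean(errors)
```

The model or tuning parameter would then be chosen to minimize the returned error. Applied naively to a network, holding out a fold of nodes in this way deletes all edges incident to them, which is the structural difficulty that motivates splitting node pairs rather than nodes.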