Data describing networks such as social networks, citation graphs, hypertext systems, and communication networks is becoming increasingly common and important for analysis. Research on link-based classification studies methods that leverage the connections in such networks to improve classification accuracy. Recently, a number of such methods have been proposed that first construct a set of latent features or links summarizing the network, and then use this information for inference. Some work has claimed that these latent methods improve accuracy, but has not compared them against the best non-latent methods. In response, this article provides the first substantial comparison between these two groups of methods. Using six real datasets, a range of synthetic data, and multiple underlying models, we show that (non-latent) collective inference methods usually perform best, but that a dataset's label sparsity, attribute predictiveness, and link density can dramatically affect the performance trends. Inspired by these findings, we introduce three novel algorithms that combine a latent construction with a latent or non-latent method, and demonstrate that they can sometimes substantially increase accuracy.