Visual re-ranking has received considerable attention in recent years. It aims to improve the performance of text-based image retrieval by boosting the rank of relevant images using visual information. Hypergraphs have been widely used for relevance estimation: the textual retrieval results are taken as vertices, and re-ranking is formulated as transductive learning on the hypergraph. The potential of hypergraph learning is essentially determined by the hypergraph construction scheme. To this end, in this paper we introduce a novel data representation technique, named adaptive collaborative representation, for hypergraph learning. In contrast to conventional collaborative representation, we exploit data locality to adaptively select relevant, nearby samples for a test sample and discard irrelevant, faraway ones. Moreover, at the feature level, we impose a weight matrix on the representation errors to adaptively emphasize important features and reduce the effect of redundant or noisy ones. Finally, we add a nonnegativity constraint on the representation coefficients to enhance the interpretability of the hypergraph. These properties allow us to construct a more informative, higher-quality hypergraph, and thereby to achieve better retrieval performance than other hypergraph models. Extensive experiments on the public MediaEval benchmarks demonstrate that our re-ranking method consistently outperforms state-of-the-art methods.
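To make the idea concrete, the following is a minimal sketch of the kind of objective described above, written in our own notation rather than the paper's exact formulation: the neighborhood operator $\mathcal{N}(\cdot)$, the diagonal feature-weight matrix $W$, the distance vector $\mathbf{d}$, and the trade-off parameter $\lambda$ are all illustrative assumptions.

\[
\min_{\boldsymbol{\alpha} \geq 0} \;
\big\| W \big( \mathbf{y} - X_{\mathcal{N}(\mathbf{y})} \boldsymbol{\alpha} \big) \big\|_2^2
\; + \;
\lambda \, \big\| \mathbf{d} \odot \boldsymbol{\alpha} \big\|_2^2 ,
\]

where $X_{\mathcal{N}(\mathbf{y})}$ retains only the dictionary samples close and relevant to the test sample $\mathbf{y}$ (data locality), $W$ reweights feature dimensions to suppress redundant or noisy ones, the distance-weighted regularizer $\mathbf{d} \odot \boldsymbol{\alpha}$ shrinks the coefficients of faraway samples, and the constraint $\boldsymbol{\alpha} \geq 0$ yields nonnegative coefficients that can be interpreted directly as hyperedge weights.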