Metaheuristic applications in information retrieval research remain limited despite the importance of this problem domain. Ranking retrieved documents according to their importance is a vital issue for both the scientific and industrial communities. This paper proposes a novel variable neighborhood search (VNS) algorithm with objective-function-based adaptation for the learning to rank (LTR) problem. VNS is a global optimization metaheuristic that evolves solutions to hard problems by exploring neighbor solutions that improve on the current one. Moves from the current solution toward the next are made during the perturbation stage in order to reach the global optimum. Exploration is carried out through various mutation step sizes, whereas exploitation is performed by checking the quality of the evolved solutions with the fitness function. This research proposes a novel version of VNS that combines four random probability distributions with gradient ascent, in addition to the traditional random generator with gradient ascent, for modifying the mutated genes of the neighborhood candidate solution in the next iteration. This novel LTR method is called gradient variable neighborhood (GVN). In the experiments, we used the Microsoft Bing search (MSLR-WEB30K), Yahoo, and TREC Million Queries 2007 and 2008 (LETOR 4) datasets. The results show that the GVN method outperforms LTR methods from recent studies.
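As a rough illustration of the search loop sketched above (not the paper's exact implementation), the following Python snippet shows a generic VNS-style procedure for a linear ranking model: candidates are perturbed within neighborhoods of increasing step size, refined by gradient ascent on an assumed differentiable surrogate of the fitness, and accepted only when the fitness improves. The Gaussian perturbation, the step sizes, and the function names are illustrative assumptions, not the four distributions used in GVN.

```python
# Minimal, hedged sketch of a VNS loop with gradient-based refinement.
# The fitness function, gradient surrogate, step sizes, and perturbation
# distribution are placeholders (assumptions), not the paper's components.
import numpy as np

def perturb(weights, step_size, rng):
    """Explore a neighborhood by mutating the weight vector (Gaussian noise assumed)."""
    return weights + rng.normal(scale=step_size, size=weights.shape)

def gradient_ascent(weights, grad_fn, lr=0.01, steps=10):
    """Refine a candidate by ascending an assumed differentiable surrogate of the fitness."""
    for _ in range(steps):
        weights = weights + lr * grad_fn(weights)
    return weights

def vns_rank(fitness_fn, grad_fn, dim, step_sizes=(0.05, 0.1, 0.5), iters=100, seed=0):
    """Generic variable neighborhood search: perturb, refine, accept if fitness improves."""
    rng = np.random.default_rng(seed)
    best = rng.normal(size=dim)
    best_fit = fitness_fn(best)
    for _ in range(iters):
        k = 0
        while k < len(step_sizes):
            candidate = perturb(best, step_sizes[k], rng)    # exploration (mutation step size k)
            candidate = gradient_ascent(candidate, grad_fn)  # local refinement
            fit = fitness_fn(candidate)                      # exploitation: quality check
            if fit > best_fit:
                best, best_fit = candidate, fit
                k = 0   # improvement: restart from the smallest neighborhood
            else:
                k += 1  # no improvement: move to a larger neighborhood
    return best, best_fit

if __name__ == "__main__":
    # Toy usage: maximize a concave quadratic as a stand-in for an NDCG-style fitness.
    target = np.ones(5)
    fitness = lambda w: -np.sum((w - target) ** 2)
    grad = lambda w: -2.0 * (w - target)
    w, f = vns_rank(fitness, grad, dim=5)
    print("best fitness:", f)
```

In practice, the fitness for LTR would be an evaluation measure such as NDCG computed on a validation split, and the gradient surrogate would come from the ranking model's scoring function; both are left abstract here.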