Graph neural networks (GNNs) are powerful models capable of learning from graph-structured data and performing a variety of tasks. However, GNNs are susceptible to poisoning attacks, in which sophisticated attackers inject malicious nodes or edges into the graph topology to degrade model performance. Existing defense mechanisms, such as adversarial training, are often ineffective at improving the robustness of GNN models, and fake nodes can be crafted to deceive traditional GNN aggregation functions. In this paper, we propose RGRO, a robust GNN model built on a reliable aggregation function grounded in out-of-distribution (OOD) detection. The key idea of RGRO is to train a model that maps nodes into a latent space in which the distance between each node and the distribution of its neighbors can be measured. We propose the Mahalanobis distance as a superior alternative to cosine distance: because it accounts for the covariance of the data and is scale-invariant, it better exploits the homophily rule and captures the contextual information of the nodes, improving the robustness and accuracy of outlier detection in graph data. RGRO can improve accuracy by removing poisoned data without prior knowledge of the type of poisoning attack or the underlying GNN algorithm. We evaluate RGRO against four typical defense strategies under two types of poisoning attacks on several real-world datasets. The results show that RGRO detects poisoned data effectively and efficiently; in the best scenario, RGRO improves the accuracy of the GNN model by 0.86.
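To make the core idea concrete, the following is a minimal sketch (not the paper's actual implementation) of Mahalanobis-distance-based outlier scoring over latent node embeddings. The function names, the threshold value, and the use of a single global distribution are illustrative assumptions; the full method operates on the learned latent space and neighborhood distributions described in the paper.

```python
import numpy as np

def mahalanobis_outlier_scores(embeddings, eps=1e-6):
    """Mahalanobis distance of each node embedding from the empirical distribution.

    embeddings: (n, d) array of latent node representations.
    A small ridge (eps) is added to the covariance for numerical stability.
    """
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + eps * np.eye(embeddings.shape[1])
    inv_cov = np.linalg.inv(cov)
    diff = embeddings - mu
    # Squared Mahalanobis distance per node: diff_i^T * Sigma^{-1} * diff_i
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return np.sqrt(d2)

def keep_mask(embeddings, threshold=3.0):
    """Boolean mask of nodes retained; nodes beyond the (assumed) threshold
    are treated as poisoned and dropped before aggregation."""
    return mahalanobis_outlier_scores(embeddings) <= threshold
```

Unlike cosine distance, this score accounts for feature correlations and scale, so a node that deviates along a low-variance direction is flagged even if its angle to the mean embedding is small.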