In online social networks, users can assign trust levels to one another to indicate how much they trust their friends. Researchers have improved the prediction of social trust relationships through a variety of methods, including graph neural networks (GNNs), but these methods also introduce the vulnerabilities of GNNs into social trust models. We propose a data-poisoning attack on GNN-based social trust models that exploits the characteristics of social trust networks. To keep changes to the dataset from being detected, we apply a two-sample test for power-law distributions of discrete data, and we use an enhanced surrogate model to generate poisoned samples. We evaluated our approach on three real-world datasets and compared it with two other attack methods. The experimental results show that our method effectively evades detection, and on three effectiveness metrics it outperformed the other two methods on all three datasets. On one of these metrics, our attack reduced the accuracy of the attacked models by 12.6%, 22.8%, and 13.8% on the three datasets, respectively.
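To make the detectability criterion concrete, the sketch below illustrates the general idea of a two-sample test between the degree distributions of a clean and a poisoned graph. It is an illustration only, not the paper's exact procedure: it uses SciPy's two-sample Kolmogorov-Smirnov test as a stand-in for a test tailored to discrete power-law data, and the graph construction, helper names, and significance threshold are assumptions made for this example.

```python
# Illustrative sketch (not the paper's exact test): compare the degree
# distributions of a clean graph and a poisoned graph with a two-sample
# Kolmogorov-Smirnov test as a stand-in for a two-sample test for
# discrete power-law data. Graph construction and alpha are assumptions.
import networkx as nx
from scipy.stats import ks_2samp


def degrees(graph: nx.Graph) -> list[int]:
    """Return the degree sequence of a graph."""
    return [d for _, d in graph.degree()]


def poisoning_detectable(clean: nx.Graph, poisoned: nx.Graph,
                         alpha: float = 0.05) -> bool:
    """Flag the poisoned graph if its degree distribution differs
    significantly from the clean graph's (two-sample test)."""
    _, p_value = ks_2samp(degrees(clean), degrees(poisoned))
    return p_value < alpha  # small p-value: distributions differ, so the change is detectable


if __name__ == "__main__":
    # Power-law-like degree distribution, as in many social trust networks.
    clean = nx.barabasi_albert_graph(1000, 3, seed=0)
    poisoned = clean.copy()
    # Inject a small number of edges, as a poisoning attack might.
    poisoned.add_edges_from([(0, v) for v in range(500, 520)])
    print("detectable:", poisoning_detectable(clean, poisoned))
```

An attack that wants to stay undetected under such a criterion would only accept perturbations for which the test does not reject the null hypothesis, which is the role the two-sample test for discrete power-law data plays in the proposed method.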