Graph Neural Networks (GNNs) have been shown to be vulnerable to adversarial examples in many works, which has drawn substantial research attention to their robustness and security. However, the reasons for the success of adversarial attacks and the intrinsic vulnerability of GNNs remain unclear. The work presented here outlines an empirical study that further investigates these observations and provides several insights. Experimental results, analyzed across a variety of benchmark GNNs on two datasets, indicate that GNNs are indeed sensitive to adversarial attacks due to their non-robust message functions. To characterize the adversarial patterns, we introduce two measurements that quantify the randomness of node labels and features in a graph, and observe that the neighborhood entropy increases significantly under adversarial attacks. Furthermore, we find that adversarially manipulated graphs tend to be much denser and of higher rank, with most dissimilar nodes intentionally linked. The stronger the attack, such as Metattack, the more apparent these patterns become. In summary, our findings shed light on understanding adversarial attacks on graph data and point toward potential advances in enhancing the robustness of GNNs.
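
The exact definitions of the two measurements are not given in this excerpt; the sketch below is a minimal illustration, assuming "neighborhood entropy" is the Shannon entropy of the label distribution over each node's neighbors, averaged over the graph. Function and variable names are illustrative, not the paper's own.

```python
import numpy as np

def neighborhood_label_entropy(adj: np.ndarray, labels: np.ndarray) -> float:
    """Average Shannon entropy of neighbor labels (assumed measurement)."""
    n = adj.shape[0]
    num_classes = int(labels.max()) + 1
    entropies = []
    for v in range(n):
        neighbors = np.flatnonzero(adj[v])
        if neighbors.size == 0:
            continue  # isolated nodes contribute no neighborhood entropy
        counts = np.bincount(labels[neighbors], minlength=num_classes)
        p = counts / counts.sum()
        p = p[p > 0]
        entropies.append(float(-(p * np.log2(p)).sum()))
    return float(np.mean(entropies)) if entropies else 0.0

# Hypothetical usage: compare a clean adjacency matrix with an attacked one.
# If the attack links dissimilar nodes, the average entropy is expected to rise,
# and the attacked graph is expected to be denser and of higher rank.
# clean_score    = neighborhood_label_entropy(adj_clean, labels)
# attacked_score = neighborhood_label_entropy(adj_attacked, labels)
# density        = adj_attacked.sum() / (adj_attacked.shape[0] ** 2)
# rank           = np.linalg.matrix_rank(adj_attacked)
```

This is a sketch under the stated assumptions, not the paper's implementation; the feature-based measurement mentioned in the abstract would require an analogous statistic over node feature similarity among neighbors.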