Deep convolutional neural networks (DCNNs) have enjoyed great success in many applications, such as computer vision, automated medical diagnosis, and autonomous systems. DCNNs have also been applied to game strategy, where the network architecture directly represents and learns the strategies of expert players on each side. Many game states can be expressed not only as a matrix suitable for DCNN training but also as a graph. However, most existing DCNN methods ignore the territorial characteristics of both sides' positions that follow from the game rules. In this paper, we therefore propose a hybrid approach that uses a graph neural network to extract features of game-playing strategies and fuses them into a DCNN. As a graph learning model, graph convolutional networks (GCNs) provide a scheme for extracting features from graph-structured data, and can thus better capture the relationships among game-playing strategies. We validate this approach by designing a hybrid network that integrates GCNs and DCNNs for the game of Go, and show that on the KGS Go dataset the hybrid model outperforms the traditional DCNN model. The hybrid model demonstrates strong performance in extracting Go game strategies.
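The abstract does not specify the exact fusion architecture, but the core idea of combining GCN-extracted graph features with DCNN feature maps can be sketched roughly. The example below is a minimal NumPy illustration, assuming a 19x19 Go board modelled as a 361-node graph with 4-neighbour adjacency, a single symmetrically normalised GCN layer, and simple channel-wise concatenation as the fusion step; all dimensions and the fusion scheme are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalisation
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)   # ReLU activation

# Build the 4-neighbour adjacency of a 19x19 board (361 intersections).
n = 19
N = n * n
A = np.zeros((N, N))
for r in range(n):
    for c in range(n):
        i = r * n + c
        if r + 1 < n:                         # edge to the intersection below
            A[i, (r + 1) * n + c] = A[(r + 1) * n + c, i] = 1
        if c + 1 < n:                         # edge to the intersection to the right
            A[i, r * n + c + 1] = A[r * n + c + 1, i] = 1

rng = np.random.default_rng(0)
H = rng.standard_normal((N, 8))               # 8 input features per intersection
W = rng.standard_normal((8, 16))              # learnable GCN weights (random here)

G = gcn_layer(A, H, W)                        # graph features, shape (361, 16)
cnn_feat = rng.standard_normal((N, 16))       # stand-in for a flattened DCNN feature map
fused = np.concatenate([G, cnn_feat], axis=1) # channel-wise fusion, shape (361, 32)
print(fused.shape)
```

In a trained hybrid model, `cnn_feat` would come from the DCNN's convolutional layers and `W` would be learned jointly; the concatenated features would then feed the policy head that predicts the next move.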