Graph Convolutional Networks (GCNs) have been widely adopted for modeling human skeleton sequences in two-person interaction recognition. Most GCN-based models achieve state-of-the-art results by leveraging either intra-body or inter-body connections. However, using only intra-body relations may ignore important interactive features between the two individuals, whereas relying solely on inter-body relations may weaken the specific motion dynamics of each skeleton. To address these shortcomings, we propose a Distinct Motion-Preserving GCN (DMP-GCN) that utilizes both intra-body and inter-body graphs to extract interactive features from two human bodies while preserving the distinct motion characteristics of each skeleton. Specifically, two motion-specific streams capture the individual motion features of each human skeleton, and an interactive stream models the interactive dynamics between the two bodies. In addition, we introduce a new graph labeling strategy, Distance Variation Labeling, a data-driven approach for defining edge strengths in the skeleton graph. Extensive experiments show that our proposed approach outperforms state-of-the-art methods on two large-scale human interaction datasets, NTU RGB+D (mutual) and NTU RGB+D 120 (mutual).