Deep neural networks have demonstrated their effectiveness on a variety of Natural Language Processing tasks, including relation classification. However, unlike traditional feature-engineering methods, which extract well-designed features tailored to a specific task, deep learning models accept a limited range of input formats; a plain word sequence is the most common setting. The input to a neural network therefore, to some extent, lacks task-specific focus. In relation classification, a sentence may express different relation types depending on which entity pair is considered; the entity pair thus indicates where the information crucial for recognizing a specific relation is located in the input sentence. Motivated by this characteristic, this paper proposes several strategies for integrating entity pair information into deep learning models for relation classification, so as to give the neural network a more explicit learning signal. Experimental results on the SemEval-2010 Task 8 dataset show that our method outperforms most state-of-the-art models without using external linguistic features.