Certain correlations exist among the various expressions of the human face, yet few works have explored them. In this paper, we propose to recognize facial expressions by extracting independent expressive representations through a novel learning procedure, called Inter-class Relational Learning (IcRL). First, a carefully designed CNN architecture simultaneously extracts features from two images of different expressions. The two extracted features are then mixed at a random ratio to obtain a hybrid feature. An attention module is proposed to assign a weight to each pixel of the hybrid feature. Finally, the weighted hybrid feature is fed to the classification module, and the whole network is trained to output the correct ratio of each expression. Unlike previous work, our method learns the mutual relations between different expression classes and enlarges Fisher's criterion between any two classes, i.e., the ratio of inter-class distance to intra-class distance, by accurately discriminating hybrid features. In this way, the discriminative power of the learned features for each expression is enhanced, which in turn improves classification performance. The IcRL method has been evaluated on five public expression databases: CK+, JAFFE, TFEID, BAUM-2i, and FER2013. The experimental results demonstrate the superiority of the proposed method over many state-of-the-art approaches.
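To make the mixing and ratio-prediction step concrete, the following is a minimal PyTorch sketch of one training step. The backbone, the attention module, the names `IcRLSketch` and `icrl_loss`, and the mixup-style soft-target loss are illustrative assumptions, not the paper's actual architecture or objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IcRLSketch(nn.Module):
    """Sketch of the IcRL idea: two expression images are encoded by a
    shared CNN, their feature maps are mixed at a random ratio, re-weighted
    by a pixel-wise attention map, and the classifier is trained to recover
    the mixing ratio of each expression class."""

    def __init__(self, num_classes: int = 7, feat_dim: int = 64):
        super().__init__()
        # Shared backbone (a stand-in for the paper's CNN architecture).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Pixel-wise attention: one weight per spatial location of the
        # hybrid feature map.
        self.attention = nn.Sequential(nn.Conv2d(feat_dim, 1, 1), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x1, x2):
        f1, f2 = self.backbone(x1), self.backbone(x2)
        # Mix the two feature maps at a random ratio lam in (0, 1).
        lam = torch.rand(1, device=x1.device)
        hybrid = lam * f1 + (1.0 - lam) * f2
        # Assign a weight to each pixel of the hybrid feature.
        weighted = hybrid * self.attention(hybrid)
        pooled = weighted.mean(dim=(2, 3))  # global average pooling
        return self.classifier(pooled), lam

def icrl_loss(logits, y1, y2, lam):
    # Train the network to output the correct ratio of each expression:
    # the soft target assigns weight lam to y1's class and (1 - lam)
    # to y2's class.
    return (lam * F.cross_entropy(logits, y1)
            + (1.0 - lam) * F.cross_entropy(logits, y2))

# Usage on random stand-in data (batch of 48x48 grayscale face crops):
model = IcRLSketch()
x1, x2 = torch.randn(8, 1, 48, 48), torch.randn(8, 1, 48, 48)
y1, y2 = torch.randint(0, 7, (8,)), torch.randint(0, 7, (8,))
logits, lam = model(x1, x2)
loss = icrl_loss(logits, y1, y2, lam)
loss.backward()
```

Under this reading, correctly apportioning probability mass between the two mixed classes forces the encoder to separate class clusters relative to their spread, which is one way the abstract's claim of enlarging Fisher's criterion can be interpreted.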