Person re-identification (re-id) aims to recognize a specific pedestrian across non-overlapping surveillance camera views. Most re-id methods perform the retrieval task by comparing the similarity of pedestrian features extracted from deep learning models, so learning discriminative features is critical for person re-identification. Many works supervise model training with one or more loss functions to make the learned features discriminative. The softmax loss is one of the most widely used loss functions in re-id. However, the traditional softmax loss inherently focuses on feature separability and fails to enforce the compactness of within-class features. To further improve re-id accuracy, many efforts have been made to shrink the within-class discrepancy as well as the between-class similarity. In this paper, we propose a circle-based ratio loss for person re-identification. Concretely, we normalize the learned features and classification weights to map these vectors onto the unit hypersphere. We then take the ratio of the maximal intra-class distance to the minimal inter-class distance as the objective loss, so that between-class separability and within-class compactness are optimized simultaneously during training. Finally, with the joint training of an improved softmax loss and the ratio loss, the deep model can mine discriminative pedestrian information and learn robust features for the re-id task. Comprehensive experiments on three re-id benchmark datasets demonstrate the effectiveness of the proposed method. Specifically, we achieve 83.12% mAP on Market-1501, 71.70% mAP on DukeMTMC-reID, and 66.26%/63.24% mAP on the labeled/detected sets of CUHK03.
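To make the ratio-loss idea concrete, the following is a minimal PyTorch sketch of how such a loss could be computed over a mini-batch; the function name `ratio_loss`, the batch-level approximation of the intra-/inter-class distances, and the epsilon stabilizer are our own assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def ratio_loss(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Sketch of a circle-based ratio loss on a mini-batch.

    features: (B, D) embeddings; labels: (B,) identity labels.
    Assumes the batch contains at least two samples per identity and
    at least two identities (e.g., PK sampling), so that both intra-
    and inter-class pairs exist.
    """
    # Map embeddings onto the unit hypersphere via L2 normalization.
    feats = F.normalize(features, dim=1)

    # Pairwise Euclidean distances between all embeddings in the batch.
    dist = torch.cdist(feats, feats)

    # Boolean masks for intra-class pairs (same label, excluding self)
    # and inter-class pairs (different labels).
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra_mask = same & ~eye
    inter_mask = ~same

    # Ratio of the largest within-class distance to the smallest
    # between-class distance; minimizing it tightens classes while
    # pushing different classes apart.
    max_intra = dist[intra_mask].max()
    min_inter = dist[inter_mask].min()
    return max_intra / (min_inter + 1e-12)
```

In this reading, the loss decreases either when the farthest same-identity pair is pulled closer or when the nearest different-identity pair is pushed farther, which is how the ratio form optimizes compactness and separability at the same time; in practice it would be combined with the improved softmax loss as a joint objective.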