Zero-shot learning (ZSL) recognizes new classes by transferring semantic knowledge from seen classes to unseen classes according to learned relationships between visual and semantic features. However, because the visual and semantic feature spaces have different manifold distributions, many algorithms rely too heavily on the cross-modal mapping between them, and this mapping lacks effective constraints. Projecting from the high-dimensional visual feature space into the semantic space produces highly overlapping semantic attributes, which gives rise to the hubness problem. In addition, generalized zero-shot learning suffers from severe domain shift. To address these problems, we propose a zero-shot learning method with semantic pivot regularization (SPR-ZSL). In SPR-ZSL, the low-dimensional semantic attributes are first mapped into the visual space to alleviate the hubness problem. A semantic pivot regularization loss is then designed to guide the embedding network so that the learned semantic attributes become more discriminative than the original ones. Finally, a relation network adaptively learns the similarity between semantic attributes and image features to perform image classification. The effectiveness of the proposed method is validated on five datasets for both conventional and generalized zero-shot learning tasks, and comprehensive comparisons with other state-of-the-art methods demonstrate its effectiveness.
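The pipeline the abstract outlines can be illustrated with a minimal toy sketch. This is not the authors' implementation: the linear semantic-to-visual map `W`, the single-layer relation scorer `W_r`, and the margin-based separation penalty standing in for the semantic pivot regularization loss are all illustrative assumptions with untrained random weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, attr_dim, vis_dim, n_imgs = 5, 12, 64, 8

# Toy data: one semantic attribute vector per class, a batch of image features.
attributes = rng.normal(size=(n_classes, attr_dim))
images = rng.normal(size=(n_imgs, vis_dim))

# Step 1 (assumed linear form): embed low-dimensional semantic attributes into
# the visual space -- the mapping direction SPR-ZSL uses to alleviate hubness.
W = rng.normal(size=(attr_dim, vis_dim)) * 0.1      # hypothetical, untrained
class_protos = attributes @ W                        # (n_classes, vis_dim)

# Step 3 (toy stand-in for the relation network): score each image/class pair
# from the concatenated pair of features, then classify by the highest score.
W_r = rng.normal(size=(2 * vis_dim, 1)) * 0.1        # hypothetical, untrained
pairs = np.concatenate(
    [np.repeat(images, n_classes, axis=0),           # image i repeated per class
     np.tile(class_protos, (n_imgs, 1))], axis=1)    # prototypes cycled per image
relations = np.maximum(pairs @ W_r, 0).reshape(n_imgs, n_classes)
preds = relations.argmax(axis=1)                     # predicted class per image

# Step 2 (assumed form of the regularizer): penalize embedded class prototypes
# that collapse toward each other, keeping the attributes discriminative.
diff = class_protos[:, None, :] - class_protos[None, :, :]
pairwise_dist = np.sqrt((diff ** 2).sum(axis=-1))    # (n_classes, n_classes)
off_diag = ~np.eye(n_classes, dtype=bool)
margin = 1.0                                         # hypothetical margin
pivot_loss = np.maximum(margin - pairwise_dist[off_diag], 0).mean()

print(relations.shape, preds.shape, pivot_loss >= 0.0)
```

In an actual training loop, `W`, `W_r`, and a deeper relation module would be optimized jointly, with the separation penalty added to the classification objective; this sketch only shows the forward computation and the shape of the regularizer.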