The cost of hyperspectral image (HSI) classification stems primarily from the annotation of image pixels. In real-world classification scenarios, measurement and annotation are both time-consuming and labor-intensive, so reducing the number of labeled pixels while maintaining classification accuracy is a key research focus in HSI classification. This paper introduces a multi-strategy triple network classifier (MSTNC) that addresses the problem of limited labeled data in HSI classification by improving the learning strategy. First, a contrastive learning strategy is used to design a lightweight triple network classifier (TNC) with low sample dependence. Because triple sample pairs are constructed, the effective number of training samples is increased, which benefits the extraction of intra-class and inter-class features of pixels. Second, an active learning strategy is used to label the most valuable pixels, improving the quality of the labeled data. To address the difficulty of sampling effectively under extremely limited labeling budgets, we propose a new feature-mixed active learning (FMAL) method to query valuable samples. Fine-tuning then helps the MSTNC learn a more comprehensive feature distribution, reducing the dependence of sample querying on model accuracy and thereby improving sample quality. Finally, we propose an innovative dual-threshold pseudo-active learning (DSPAL) strategy that selects pseudo-labeled samples exhibiting both high confidence and high uncertainty. This extends the training set without increasing the labeling cost and further improves the classification accuracy of the model. Extensive experiments are conducted on three benchmark HSI datasets. Across various labeling ratios, the MSTNC outperforms several state-of-the-art methods. In particular, under extremely small-sample conditions (five labeled samples per class), the overall accuracy reaches 82.97% (IP), 87.94% (PU), and 86.57% (WHU).
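The dual-threshold screening idea behind DSPAL, keeping pseudo-labeled pixels that are simultaneously confident and informative, can be sketched as follows. This is a minimal illustration only, not the paper's exact formulation: it assumes confidence is the maximum of the averaged softmax output, uncertainty is the spread of the winning-class probability across several stochastic views, and the function name and thresholds `tau_conf`/`tau_unc` are hypothetical.

```python
import numpy as np

def dual_threshold_select(view_probs, tau_conf=0.9, tau_unc=0.05):
    """Illustrative dual-threshold pseudo-label screening (assumed criteria).

    view_probs: (V, N, C) class probabilities for N unlabeled pixels
                under V stochastic forward passes or augmented views.
    Keeps pixels whose averaged prediction is confident (>= tau_conf)
    yet varies noticeably across views (std >= tau_unc).
    Returns selected pixel indices and their pseudo-labels.
    """
    mean_probs = view_probs.mean(axis=0)        # (N, C) averaged prediction
    pseudo_labels = mean_probs.argmax(axis=1)   # hard pseudo-labels
    confidence = mean_probs.max(axis=1)         # confidence of the mean prediction

    # Uncertainty: spread of the winning-class probability across the V views
    idx = np.arange(view_probs.shape[1])
    winner_probs = view_probs[:, idx, pseudo_labels]   # (V, N)
    uncertainty = winner_probs.std(axis=0)

    mask = (confidence >= tau_conf) & (uncertainty >= tau_unc)
    return np.where(mask)[0], pseudo_labels[mask]
```

The selected pixels, paired with their pseudo-labels, would then be appended to the training set at no additional labeling cost; the concrete confidence and uncertainty measures and threshold values in DSPAL may differ from those assumed here.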