Owing to their end-to-end optimization and strong generalization ability, convolutional neural networks have been widely applied to hyperspectral image (HSI) classification, where they play an irreplaceable role. However, previous studies struggle with two major challenges: (1) HSIs contain complex topographic features, and the numbers of labeled samples across categories are imbalanced, which leads to poor classification of categories with few labeled samples; (2) as neural network models deepen, it becomes difficult to extract more discriminative spectral-spatial features. To address these issues, we propose a discriminative spectral-spatial-semantic feature network based on shuffle and frequency attention mechanisms for HSI classification. Our approach consists of four main parts: a spectral-spatial shuffle attention module (SSAM), a context-aware high-level spectral-spatial feature extraction module (CHSFEM), a spectral-spatial frequency attention module (SFAM), and a cross-connected semantic feature extraction module (CSFEM). First, to fully exploit category attribute information, the SSAM is designed with a "Deconstruction-Reconstruction" structure, alleviating the poor classification performance caused by imbalanced labeled samples. Considering that deep spectral-spatial features are difficult to extract, the CHSFEM and SFAM are constructed: the former adopts a "Horizontal-Vertical" structure to capture context-aware high-level multiscale features, while the latter introduces multiple frequency components to compress channels and obtain more diverse features. Finally, to suppress noisy boundaries efficiently and capture abundant semantic information, the CSFEM is devised. Extensive experiments are conducted on four public datasets: the OA, AA, and Kappa metrics all exceed 99% on each dataset, demonstrating that our method achieves satisfactory performance and outperforms other comparative methods.
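
As a rough illustration of the idea behind SFAM, that several frequency components (rather than a single global average) can compress each channel into descriptors used for attention weighting, the following is a minimal sketch assuming a PyTorch-style implementation. The module name FreqChannelAttention, the chosen DCT frequency indices, and the reduction ratio are illustrative assumptions for exposition only, not the paper's actual SFAM.

```python
# Minimal sketch of multi-frequency channel attention: each channel is projected
# onto a few 2-D DCT basis functions, and the resulting descriptors drive a
# squeeze-and-excitation-style gating of the spectral-spatial feature map.
import math
import torch
import torch.nn as nn


def dct_filter(h, w, u, v):
    """Return an (h, w) 2-D DCT basis function for frequency indices (u, v)."""
    ys = torch.arange(h, dtype=torch.float32)
    xs = torch.arange(w, dtype=torch.float32)
    cos_y = torch.cos((2 * ys + 1) * u * math.pi / (2 * h))
    cos_x = torch.cos((2 * xs + 1) * v * math.pi / (2 * w))
    return cos_y[:, None] * cos_x[None, :]


class FreqChannelAttention(nn.Module):
    """Compress channels with multiple frequency components, then gate them."""

    def __init__(self, channels, height, width,
                 freqs=((0, 0), (0, 1), (1, 0)), reduction=16):
        super().__init__()
        # Precompute one DCT basis per chosen frequency index pair.
        basis = torch.stack([dct_filter(height, width, u, v) for u, v in freqs])
        self.register_buffer("basis", basis)                   # (F, H, W)
        self.fc = nn.Sequential(
            nn.Linear(channels * len(freqs), channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                       # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # Project every channel onto each frequency basis -> (B, C, F) descriptors.
        desc = torch.einsum("bchw,fhw->bcf", x, self.basis).reshape(b, -1)
        weights = self.fc(desc).view(b, c, 1, 1)                # per-channel gates in (0, 1)
        return x * weights                                      # reweighted feature map


# Hypothetical usage on a 9x9 HSI patch feature map with 64 channels.
if __name__ == "__main__":
    attn = FreqChannelAttention(channels=64, height=9, width=9)
    out = attn(torch.randn(2, 64, 9, 9))
    print(out.shape)  # torch.Size([2, 64, 9, 9])
```

Using several DCT bases instead of only global average pooling keeps complementary frequency information in the channel descriptors before the gating step, which is one plausible way to realize "compressing channels with multiple frequency components."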