A critical issue in hyperspectral image (HSI) classification is the imbalance between data dimensionality and the number of available training samples. This study addresses the issue by proposing a method that integrates minimum noise fraction (MNF) and Hilbert–Huang transform (HHT) transformations into artificial neural networks (ANNs) for HSI classification tasks. MNF and HHT serve as a feature extractor and an image decomposer, respectively, to minimize the influence of noise and dimensionality and to maximize training-sample efficiency. Experimental results using two benchmark datasets, the Indian Pines (IP) and Pavia University (PaviaU) hyperspectral images, are presented. To optimize the number of neurons and training samples in the ANN, 1 to 1000 neurons and four training-sample proportions were tested, and the associated classification accuracies were evaluated. For the IP dataset, the MNF1–14+HHT-transformed image set achieved a classification accuracy of 99.81% with a 30% training sample and 500 neurons; it also reached 97.62% accuracy using only a 5% training sample. For the PaviaU dataset, the highest classification accuracy was 98.70%, obtained with a 30% training sample from the MNF1–14+HHT-transformed images and 800 neurons. In general, accuracy increased with the number of neurons and with the training-sample proportion. However, the improvement flattened beyond roughly 200 neurons, indicating that the more discriminative information carried by the transformed images reduces the number of neurons needed to adequately describe the data and thus the complexity of the ANN model. Overall, the proposed method opens new avenues for using MNF and HHT transformations in ANN-based HSI classification with outstanding accuracy.
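The implementation details are not given in this abstract, but the described pipeline can be sketched roughly as follows. This is a minimal, illustrative approximation under several assumptions: MNF is emulated by noise-whitened PCA with a shift-difference noise estimate, per-pixel empirical mode decomposition along the spectral axis (via the PyEMD package) stands in for the HHT decomposition, a single-hidden-layer scikit-learn MLP plays the role of the ANN, and the synthetic cube, band counts, and helper names such as `mnf` and `n_imfs` are illustrative choices rather than the authors' code.

```python
# Sketch of an MNF + spectral-EMD feature pipeline feeding a one-hidden-layer ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "hyperspectral cube": rows x cols x bands, with 4 pixel classes.
rows, cols, bands, n_classes = 30, 30, 60, 4
labels = rng.integers(0, n_classes, size=(rows, cols))
signatures = rng.normal(size=(n_classes, bands))
cube = signatures[labels] + 0.3 * rng.normal(size=(rows, cols, bands))

def mnf(cube, n_components):
    """Approximate minimum noise fraction: whiten with a shift-difference
    noise covariance estimate, then keep the leading principal components."""
    X = cube.reshape(-1, cube.shape[-1])
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, cube.shape[-1]) / np.sqrt(2)
    evals, evecs = np.linalg.eigh(np.cov(noise, rowvar=False))
    whiten = evecs / np.sqrt(evals)                      # noise-whitening transform
    Xw = (X - X.mean(axis=0)) @ whiten
    _, pcs = np.linalg.eigh(np.cov(Xw, rowvar=False))
    pcs = pcs[:, ::-1][:, :n_components]                 # leading components first
    return (Xw @ pcs).reshape(cube.shape[0], cube.shape[1], n_components)

mnf_feats = mnf(cube, n_components=14)                   # analogous to MNF1-14 above

# Per-pixel empirical mode decomposition along the spectral axis stands in for
# the HHT step (a simplifying assumption); the first few IMFs are kept as features.
try:
    from PyEMD import EMD
    emd = EMD()
    n_imfs = 3
    hht_feats = np.zeros((rows, cols, n_imfs * bands))
    for i in range(rows):
        for j in range(cols):
            imfs = emd(cube[i, j, :])[:n_imfs]
            hht_feats[i, j, :imfs.size] = imfs.ravel()
except ImportError:                                      # PyEMD not installed: skip HHT features
    hht_feats = np.zeros((rows, cols, 0))

X = np.concatenate([mnf_feats, hht_feats], axis=-1).reshape(rows * cols, -1)
y = labels.ravel()

# 30% of labeled pixels for training, mirroring the largest proportion tested.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.30, stratify=y, random_state=0)

# Single hidden layer; the abstract sweeps 1-1000 neurons, 500 is shown here.
clf = MLPClassifier(hidden_layer_sizes=(500,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("Overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```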