Abstract-In this paper, a new technique is proposed for field effect transistor (FET) small-signal modeling using neural networks. The technique is based on combining the Mel-frequency cepstral coefficients (MFCCs) of the neural network inputs with MFCCs computed from their discrete sine transform (DST). In traditional neural systems for FET small-signal modeling, the input data sets are the scattering parameters and their corresponding frequencies over a certain band, and the outputs are the circuit elements. In the proposed approach, these data sets are treated as random signals. The MFCCs of the random signals are used to generate a small number of features characterizing the signals. In addition, further MFCC vectors are calculated from the DST of the random signals and appended to the MFCC vectors calculated from the signals themselves. The resulting feature vectors are used to train the neural networks. The objective of these new vectors is to characterize the random input sequences with a richer set of features so that the models are robust against measurement errors. This approach has two benefits: a reduction in the number of neural network inputs, and hence faster convergence of the training algorithm, and robustness against measurement errors in the testing phase. Experimental results show that the proposed technique is less sensitive to measurement errors than modeling with the actual measured scattering parameters.
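
The following is a minimal sketch, not the authors' code, of the feature-construction idea described above: each measured scattering-parameter sequence is treated as a one-dimensional random signal, its MFCCs are computed, further MFCCs are computed from its DST, and the two vectors are concatenated into a single feature vector for the neural network. The library choices (numpy, scipy, librosa) and all parameter values (the nominal sampling-rate stand-in, n_mfcc, frame sizes) are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np
import librosa
from scipy.fft import dst


def mfcc_dst_features(signal: np.ndarray, n_mfcc: int = 13) -> np.ndarray:
    """Feature vector for one S-parameter trace: MFCCs of the trace
    concatenated with MFCCs of its discrete sine transform (DST)."""
    sig = np.asarray(signal, dtype=np.float32)
    sr = 8000  # nominal "sampling rate"; the frequency sweep is not audio

    def mfcc_vector(x: np.ndarray) -> np.ndarray:
        # Short analysis frames because S-parameter sweeps are short sequences.
        m = librosa.feature.mfcc(y=x, sr=sr, n_mfcc=n_mfcc,
                                 n_fft=64, hop_length=32, n_mels=20)
        return m.mean(axis=1)  # average over frames -> n_mfcc features

    feats_signal = mfcc_vector(sig)
    feats_dst = mfcc_vector(dst(sig, type=2).astype(np.float32))
    return np.concatenate([feats_signal, feats_dst])


# Example: a magnitude trace over 200 frequency points (synthetic stand-in
# for a measured S-parameter); yields 2 * n_mfcc = 26 neural-network inputs.
s21_mag = np.abs(np.random.randn(200)).astype(np.float32)
features = mfcc_dst_features(s21_mag)
print(features.shape)  # (26,)
```

In this sketch the dimensionality of the neural network input is fixed by n_mfcc rather than by the number of measured frequency points, which reflects the stated benefit of reducing the number of network inputs.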