In this study, a novel filter bank design is proposed for speech emotion recognition as an alternative to the current state-of-the-art MFCC (Mel-Frequency Cepstral Coefficients) and Mel filter banks. These filter banks are expected to benefit speech emotion recognition applications and to enable further improvements in this area. Many filter banks have been proposed for speech recognition applications, but they either contain too many filters or require cumbersome mathematical operations to compute. MFCC requires the calculation of the DCT (Discrete Cosine Transform), and the resulting coefficients are difficult to interpret. Mel filters are easy to interpret, but they contain too many filters. The proposed filter banks are faster and easier to compute, and they can be interpreted more readily than MFCC and Mel filters. We apply these filter banks with NVIDIA's CNN model and an SVM-SMO classifier to compare them with MFCC and Mel filter banks. We also implement feature selection, data augmentation, and several techniques for handling imbalanced datasets to demonstrate the effectiveness of the proposed filter banks.
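For context, the sketch below illustrates the baseline features the study compares against, not the proposed filter banks themselves: triangular Mel filter bank energies and MFCCs obtained by applying a DCT to the log filter bank energies. The parameter values (sample rate, FFT size, number of filters, number of coefficients) are illustrative assumptions rather than the settings used in this work.

```python
import numpy as np
from scipy.fft import dct


def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)


def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)


def mel_filter_bank(sr=16000, n_fft=512, n_filters=26):
    """Triangular filters with center frequencies evenly spaced on the Mel scale."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank


def mfcc_from_power_spectrum(power_spec, fbank, n_coeffs=13):
    """Log Mel energies followed by a type-II DCT.

    The DCT is the extra transform step that the proposed filter banks aim to avoid.
    power_spec: array of shape (frames, n_fft // 2 + 1).
    """
    energies = np.log(power_spec @ fbank.T + 1e-10)
    return dct(energies, type=2, axis=-1, norm="ortho")[..., :n_coeffs]
```

In this conventional pipeline, interpretability is lost at the DCT step, since each MFCC coefficient mixes contributions from all Mel bands; the log filter bank energies themselves remain directly tied to frequency regions, which is the trade-off the proposed design targets.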