A speaker's voice is one of their unique identifying characteristics. Today, not only humans but also machines can identify a person from audio: they measure acoustic properties of the voice and classify the speaker accordingly. Speaker recognition remains challenging, however, when the voice is degraded and the dataset is limited; a speaker can be identified effectively only when feature extraction from the voice is accurate. The Mel-Frequency Cepstral Coefficient (MFCC) is the most widely used feature-extraction method for the human voice. We introduce an improved feature-extraction method for effective speaker recognition from degraded audio signals. This article presents experimental results for a modified MFCC combined with a Gaussian Mixture Model (GMM) on a purpose-built degraded human voice dataset. MFCC transforms the audio signal into numerical descriptors of its acoustic characteristics, which a data-science model then uses to recognize the speaker efficiently. The experiments use degraded voice recordings in which high background noise accompanies the audio signal, and they also examine the impact of the sampling frequency (SF) on the overall speaker-identification process when the signal-to-noise ratio (SNR) is low (as low as 1 dB). With the modified MFCC, we observe improved speaker recognition at SNRs as low as 1 dB, which we attribute to the higher SF and the lower frequency range used for the mel-scale triangular filters.
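To make the feature-extraction step concrete, the following is a minimal sketch of a standard MFCC pipeline in NumPy (framing, windowing, power spectrum, mel-scale triangular filterbank, log, DCT-II). It is illustrative only, not the authors' modified MFCC: all parameter values (16 kHz sampling frequency, 400-sample frames, 26 filters, 13 coefficients) are assumed defaults, and the configurable `f_low`/`f_high` arguments are included only to show where the filterbank frequency range discussed above would be adjusted.

```python
import numpy as np

def hz_to_mel(f):
    # standard mel-scale mapping
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sf, f_low, f_high):
    """Triangular filters spaced evenly on the mel scale between f_low and f_high."""
    mels = np.linspace(hz_to_mel(f_low), hz_to_mel(f_high), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sf).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sf=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_coeffs=13, f_low=0.0, f_high=None):
    """Return an (n_frames, n_coeffs) array of MFCC features."""
    if f_high is None:
        f_high = sf / 2.0
    # frame the signal and apply a Hamming window
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # log mel-filterbank energies (small epsilon avoids log(0))
    fb = mel_filterbank(n_filters, n_fft, sf, f_low, f_high)
    energies = np.log(power @ fb.T + 1e-10)
    # DCT-II to decorrelate the filterbank energies into cepstral coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1) / (2.0 * n_filters)))
    return energies @ dct.T
```

In practice a library implementation (e.g. `librosa.feature.mfcc`) would be used; the sketch above exposes the filterbank frequency bounds explicitly because narrowing them is the kind of modification the abstract describes.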
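The GMM classification stage can likewise be sketched briefly. The snippet below, using scikit-learn's `GaussianMixture`, fits one GMM per enrolled speaker on that speaker's feature frames and identifies a test utterance as the speaker whose model yields the highest average log-likelihood. This is a generic GMM speaker-identification setup under assumed settings (4 diagonal-covariance components), not the specific configuration used in the paper's experiments.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_by_speaker, n_components=4, seed=0):
    """Fit one GMM per enrolled speaker on that speaker's feature frames.

    features_by_speaker: dict mapping speaker name -> (n_frames, n_dims) array.
    """
    models = {}
    for name, feats in features_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=seed)
        models[name] = gmm.fit(feats)
    return models

def identify(models, feats):
    """Return the enrolled speaker whose GMM gives the highest
    average log-likelihood for the test frames."""
    return max(models, key=lambda name: models[name].score(feats))
```

Scoring with the average per-frame log-likelihood makes the decision insensitive to utterance length, which matters when test recordings vary in duration.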