Speaker identification in challenging acoustic environments, degraded by noise, reverberation, and emotional fluctuations, requires improved feature extraction techniques. Although existing methods effectively extract distinct acoustic features, they show limitations in these adverse settings. To overcome these limitations, we propose the Temporal Context-Enhanced Features (TCEF) approach, which provides a consistent audio representation for better performance under various acoustic conditions. TCEF leverages a context window to average features across adjacent frames, effectively reducing short-term variations caused by noise, reverberation, and fluctuations in emotional speech, as well as natural variability in neutral recordings. This enhances the distinctive characteristics of a speaker's voice, improving speaker identification in both challenging and neutral acoustic environments. To evaluate the performance of TCEF against conventional features, a One-Dimensional Convolutional Neural Network (1D-CNN) was used for a detailed frame-level analysis and a Long Short-Term Memory (LSTM) network for a comprehensive sequence-level analysis. We used four datasets to assess the effectiveness of the TCEF approach. The GRID and RAVDESS datasets represent neutral and emotional speech, respectively. To test the robustness of our system under adverse acoustic conditions, we created two additional datasets, GRID-NR and RAVDESS-NR, which are modified versions of the original GRID and RAVDESS incorporating added noise and reverberation. Performance evaluation results showed that TCEF significantly outperformed existing feature extraction methods in identifying speakers across diverse acoustic environments.

INDEX TERMS Speaker identification, feature extraction, challenging acoustic environments, temporal context-enhanced features, convolutional neural networks, long short-term memory.
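
The abstract describes TCEF as averaging frame-level features within a context window of adjacent frames. The following is a minimal sketch of that smoothing operation, assuming the input is an ordinary frame-level feature matrix such as MFCCs of shape (num_frames, num_coeffs); the function name, the window size, and the use of MFCCs are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np


def temporal_context_average(features: np.ndarray, context: int = 5) -> np.ndarray:
    """Average each frame with its `context` neighbours on either side.

    features : (num_frames, num_coeffs) frame-level feature matrix
    context  : number of neighbouring frames on each side of the window
               (hypothetical hyperparameter, not a value from the paper)
    """
    num_frames, _ = features.shape
    smoothed = np.empty_like(features, dtype=float)
    for t in range(num_frames):
        lo = max(0, t - context)              # clamp the window at utterance edges
        hi = min(num_frames, t + context + 1)
        smoothed[t] = features[lo:hi].mean(axis=0)
    return smoothed


# Example: smooth 13-dimensional MFCC frames for a 200-frame utterance
mfcc = np.random.randn(200, 13)               # placeholder for real MFCC frames
tcef_like = temporal_context_average(mfcc, context=5)
print(tcef_like.shape)                        # (200, 13)
```

The sketch only illustrates the averaging step; in the paper's pipeline the smoothed features would then be fed to the 1D-CNN (frame-level) or LSTM (sequence-level) classifiers described above.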