Cochlear implants (CIs) rely on coding strategies such as the advanced combination encoder (ACE) to process auditory signals, but these methods struggle to reconstruct intelligible speech in noisy environments. To address this challenge, this paper presents a deep learning (DL) model designed to predict the electrodograms generated by the ACE coding strategy for CIs. The model uses a temporal convolutional network (TCN) to capture the temporal dependencies of auditory signals, combined with convolutional layers that encode audio features. The model was trained on the TIMIT dataset, with .wav audio files transformed into electrodograms via the ACE strategy. Performance was evaluated using the short-time objective intelligibility (STOI) metric, supplemented by the word recognition score (WRS). The proposed model achieved an STOI score of 0.6542 and a WRS of 66.83, competitive with traditional strategies. This study advances CI coding strategies through artificial intelligence (AI)-driven approaches, offering a framework for improved auditory signal processing.
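The TCN mentioned above builds its temporal modeling on causal dilated 1-D convolutions, in which the output at time t depends only on present and past samples, and stacked layers with growing dilation expand the receptive field exponentially. The following is a minimal, illustrative sketch of that building block in plain Python; it is not the paper's implementation, and the two-tap filter weights are arbitrary examples.

```python
def causal_dilated_conv1d(x, weights, dilation=1):
    """Causal dilated 1-D convolution: the output at time t combines
    x[t], x[t - dilation], x[t - 2*dilation], ... (implicit zero padding
    on the left keeps the operation causal and length-preserving)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            idx = t - i * dilation
            if idx >= 0:  # indices before the signal start are treated as zero
                acc += w * x[idx]
        out.append(acc)
    return out

# Stacking layers with dilations 1, 2, 4, ... doubles the temporal
# context per layer, which is how a TCN captures long-range dependencies.
signal = [1.0, 2.0, 3.0, 4.0, 5.0]
layer1 = causal_dilated_conv1d(signal, [0.5, 0.5], dilation=1)
layer2 = causal_dilated_conv1d(layer1, [0.5, 0.5], dilation=2)
```

In a full TCN, each such convolution would be followed by a nonlinearity and a residual connection; here only the causal dilation mechanism is shown.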