Because traditional single-channel speech enhancement algorithms are sensitive to the acoustic environment and perform poorly, a speech enhancement algorithm based on an attention-gated long short-term memory (LSTM) network is proposed. To simulate human auditory perceptual characteristics, the algorithm divides the frequency range into bands according to the Bark scale. From these bands, Bark frequency cepstral coefficients (BFCCs), their derivative features, and pitch-based features are extracted. Furthermore, considering that different noises have different influences on clean speech, an attention mechanism is applied to select the information that is less polluted by noise, which helps reconstruct the clean speech. To adaptively reallocate the power ratio between speech and noise when constructing the ratio mask, an ideal ratio mask (IRM) incorporating the inter-channel correlation (ICC) is adopted as the learning target. In addition, to improve network performance, the algorithm introduces a multiobjective learning strategy that jointly optimizes the network with a voice activity detector (VAD). Subjective and objective experiments show that the proposed algorithm outperforms the baseline algorithms. In real-time experiments, the proposed algorithm maintains high real-time performance and fast convergence.

INDEX TERMS Speech enhancement, long short-term memory, attention mechanism, Bark scale.
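As a concrete illustration of the approach summarized above, the following minimal sketch shows an LSTM mask estimator with a simple sigmoid attention gate and two output heads, one for the per-band ratio mask and one for frame-level VAD, trained with a joint multiobjective loss. It assumes PyTorch, 22 Bark-scale bands, 42 input features, and a weighting factor of 0.1 for the VAD loss; these choices are illustrative assumptions, not the authors' exact architecture or hyperparameters.

```python
# Hypothetical sketch of an attention-gated LSTM mask estimator with
# multiobjective (IRM + VAD) outputs; layer sizes and loss weights are assumed.
import torch
import torch.nn as nn

class AttentionGatedLSTM(nn.Module):
    def __init__(self, num_features=42, num_bands=22, hidden_size=256):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, num_layers=2, batch_first=True)
        # Attention gate: per-frame, per-dimension weights intended to emphasize
        # information that is less corrupted by noise.
        self.attn = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Sigmoid())
        self.mask_head = nn.Sequential(nn.Linear(hidden_size, num_bands), nn.Sigmoid())  # ratio mask in [0, 1]
        self.vad_head = nn.Sequential(nn.Linear(hidden_size, 1), nn.Sigmoid())           # frame-level VAD probability

    def forward(self, feats):
        h, _ = self.lstm(feats)      # (batch, frames, hidden)
        h = self.attn(h) * h         # gate the LSTM outputs with attention weights
        return self.mask_head(h), self.vad_head(h)

# Toy usage: 4 utterances, 100 frames, 42 features per frame.
model = AttentionGatedLSTM()
feats = torch.randn(4, 100, 42)
mask, vad = model(feats)

# Multiobjective loss: mask regression plus weighted VAD classification
# (random targets here, only to show the joint optimization pattern).
mask_target = torch.rand(4, 100, 22)
vad_target = torch.randint(0, 2, (4, 100, 1)).float()
loss = nn.functional.mse_loss(mask, mask_target) \
     + 0.1 * nn.functional.binary_cross_entropy(vad, vad_target)
loss.backward()
```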