A unidirectional LSTM-based music generation model cannot account for future information when generating music: it learns only the dependence of the current time step on past context, which results in generated music with poor structural stability and low quality. To address this limitation, we develop a music generation model based on a bidirectional LSTM. During training, the model captures musical information from both past and future time steps, so the learned probability distribution over musical elements more closely approximates that of real music. This yields generated compositions with greater structural stability and higher quality. Finally, we validate the proposed approach experimentally, and the results demonstrate its effectiveness.
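To make the architecture concrete, the following is a minimal sketch of a bidirectional LSTM sequence model in PyTorch. The framework choice, class name `BiLSTMMusicModel`, and all hyperparameters (`vocab_size`, `embed_dim`, `hidden_dim`, `num_layers`) are illustrative assumptions, not values taken from the paper; the sketch only shows how `bidirectional=True` exposes both past and future context at each time step during training.

```python
import torch
import torch.nn as nn


class BiLSTMMusicModel(nn.Module):
    """Bidirectional LSTM over a sequence of discrete musical events.

    All sizes below are placeholder assumptions; the paper does not
    specify the vocabulary, embedding width, or hidden dimension.
    """

    def __init__(self, vocab_size=128, embed_dim=256, hidden_dim=512, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True lets the model condition on both past and
        # future time steps during training.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        # Forward and backward hidden states are concatenated, hence 2 * hidden_dim.
        self.proj = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer indices of musical events
        x = self.embed(tokens)
        out, _ = self.lstm(x)
        # Logits over the event vocabulary at every time step.
        return self.proj(out)


if __name__ == "__main__":
    model = BiLSTMMusicModel()
    dummy = torch.randint(0, 128, (4, 64))  # batch of 4 sequences, 64 steps each
    logits = model(dummy)
    print(logits.shape)  # torch.Size([4, 64, 128])
```

In this sketch, the backward direction gives each time step access to future events, which is the property the paper credits with producing a distribution over musical elements closer to real music; how the trained model is then used for generation is not shown here.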