In this paper, a piano-assisted automatic accompaniment system is designed using a heuristic dynamic programming approach and applied in a practical setting. From the perspective of assisting pop-song writing, we target the generation of piano vocal textures for accompaniment, build a piano accompaniment generation tool through systematic algorithm design and programming, and generate a large number of recognizable texture styles within a controlled range under a single system. Mainstream neural-network approaches to music detection usually recast the problem as image classification or sequence labelling and solve it with convolutional or recurrent neural networks; however, these existing approaches neglect the relative-loudness estimation subtask and the inherent temporality of music data. Likewise, existing neural-network methods for music generation have not yet addressed the discreteness introduced by the piano-roll representation of music data, nor the still-limited control domain and variety of instruments in the controllable music generation task. To address these two problems, this paper proposes a controllable music generation neural network model for multi-instrument polyphonic music. Several sets of experiments on the collected MIDICN data set verify the effectiveness of the proposed model, and the results show that it achieves better performance in negative log-likelihood, perplexity, musicality measures, domain-similarity analysis, and human evaluation.
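The piano-roll representation referred to above encodes music as a discrete time-by-pitch grid, which is the source of the discreteness problem the model must handle. A minimal sketch of this encoding follows; the function name and the note-event format are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def to_piano_roll(notes, n_steps, n_pitches=128):
    """Convert (pitch, onset_step, duration_steps) note events into a
    binary piano-roll matrix of shape (n_steps, n_pitches).

    Hypothetical helper for illustration; the paper's actual
    preprocessing pipeline is not specified in the abstract.
    """
    roll = np.zeros((n_steps, n_pitches), dtype=np.int8)
    for pitch, onset, dur in notes:
        # Mark the cells covered by this note as active (1).
        roll[onset:onset + dur, pitch] = 1
    return roll

# Example: a C major triad (MIDI pitches 60, 64, 67) held for 4 steps.
notes = [(60, 0, 4), (64, 0, 4), (67, 0, 4)]
roll = to_piano_roll(notes, n_steps=8)
print(roll.shape)       # (8, 128)
print(int(roll.sum()))  # 12 active cells: 3 pitches x 4 steps
```

Because each cell is a hard 0/1 value, durations and dynamics are quantized, which is why generation models operating on such grids face the discreteness issues the abstract mentions.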