This study explores creative music composition using variational autoencoder (VAE) architectures within musical instrument digital interface (MIDI)-based generative neural networks (GNNs). It evaluates how successfully VAEs generate compositions that exhibit both structural integrity and a resemblance to authentic music. Although the latent space converges during training, the degree of convergence falls slightly short of initial expectations, prompting an examination of contributing factors, with particular focus on variation in the training data. VAEs perform best when exposed to diverse training data, which underscores the importance of sufficient intermediate examples between the extremes of the distribution. The dimensionality of the latent space also proves challenging: compressing the data into a smaller latent space is difficult because of the complexity of representing data in N dimensions. The network tends to place data points further apart, and encoding additional information requires exponentially more training data, a manifestation of the curse of dimensionality. Despite the suboptimal parameters used during model construction and training, the study concludes that they are sufficient to yield commendable results, demonstrating the promise of MIDI-based GNNs with VAEs for pushing the boundaries of innovative music composition.
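To make the latent-space discussion concrete, the sketch below illustrates the two ingredients at the heart of any VAE: the reparameterization trick used to sample a latent vector, and the Kullback-Leibler (KL) term that pulls the learned posterior toward a standard normal prior, which is what drives latent-space convergence. This is a minimal illustrative example in NumPy, not the authors' implementation; the 4-dimensional latent size is a hypothetical choice.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions.

    This is the regularizer in the VAE loss; minimizing it encourages the
    encoder's posterior to converge toward the standard normal prior.
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((1, 4))       # hypothetical 4-dimensional latent space
log_var = np.zeros((1, 4))  # log-variance of 0 means unit variance
z = reparameterize(mu, log_var, rng)

# KL is exactly 0 when the posterior equals the prior, and grows as the
# encoder's output drifts away from N(0, I).
print(kl_to_standard_normal(mu, log_var))
print(kl_to_standard_normal(mu + 1.0, log_var))
```

Because the KL penalty grows with every latent dimension, enlarging the latent space without correspondingly richer training data makes it harder for the model to populate the space densely, consistent with the data-volume issue noted above.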