Audio steganography (AS) exploits the auditory redundancy of the human ear to conceal a hidden message inside an audio track. In recent studies, deep learning-based steganalysis has swiftly exposed AS by extracting high-dimensional stego acoustic features for classification. There is therefore still room for improvement in current audio steganography as applied to communication confidentiality, access control, and data protection. The main objective of this research is to improve data protection by identifying the data-embedding location in the audio. A Generative Adversarial Network-based Audio Steganography Framework (GAN-ASF) is presented in this study, which automatically learns to produce better cover audio for message embedding. The proposed framework's training architecture comprises a generator, a discriminator, and a steganalyzer trained using deep learning. The Least Significant Bit Matching (LSBM) embedding technique encodes the secret message into the steganographic cover audio, which is then forwarded to the trained steganalyzer, which misclassifies it as clean cover audio. After training, steganographic cover audio is generated for encoding the secret message. A Markov model of co-frequency sub-images is used to generate the best cover frequency sub-image for locating the hidden payload. Steganographic cover audio created by GAN-ASF has been tested and found to be of high quality for message embedding. The detection accuracy of current state-of-the-art deep learning-based steganalysis against the proposed method is lower than against existing methods. The payload-placement approach considerably increases the accuracy of stego locations in low frequencies. In the test results, GAN-ASF achieves a performance ratio of 94.5%, an accuracy ratio of 96.2%, an error rate of 15.7%, an SNR of 24.3%, and an efficiency ratio of 94.8% compared to other methods.
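To make the LSBM embedding step concrete, the following is a minimal sketch of Least Significant Bit Matching on integer audio samples. The function names (`lsb_match_embed`, `lsb_extract`), the PRNG seed, and the toy sample values are illustrative assumptions, not part of the paper's implementation; a real system would also clip samples to the valid range (e.g. 16-bit PCM) before adding ±1.

```python
import random

def lsb_match_embed(samples, bits, seed=0):
    """Embed message bits into integer audio samples via LSB matching.

    Unlike plain LSB replacement, LSBM randomly adds or subtracts 1
    when the sample's LSB disagrees with the message bit, which avoids
    the pairwise value asymmetry that simple replacement introduces.
    (Hypothetical helper, not the paper's exact implementation.)
    """
    rng = random.Random(seed)
    stego = list(samples)
    for i, bit in enumerate(bits):
        if (stego[i] & 1) != bit:
            # Randomly step up or down by one quantization level;
            # either choice flips the LSB to match the message bit.
            stego[i] += rng.choice((-1, 1))
    return stego

def lsb_extract(samples, n_bits):
    """Recover the first n_bits message bits from the sample LSBs."""
    return [s & 1 for s in samples[:n_bits]]

# Toy demonstration on made-up sample values.
cover = [100, 57, 200, 33, 80, 121]
message = [1, 0, 0, 1, 1, 0]
stego = lsb_match_embed(cover, message)
assert lsb_extract(stego, len(message)) == message
```

Extraction is identical to plain LSB extraction, which is why the scheme needs no side information beyond the message length; the ±1 randomization only changes the statistical footprint the steganalyzer sees.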