Analyzing public surveillance videos has become an important research area because of its connection to many real-world applications. Video analytics for human action recognition is of particular significance due to its utility. However, analyzing live-streaming videos to identify human actions across video frames is very challenging. The literature shows that Convolutional Neural Networks (CNNs) are among the most popular deep learning algorithms for computer vision applications. Another important observation is that the Generative Adversarial Network (GAN) architecture, combined with deep learning, has the potential to improve the effectiveness of computer vision applications. Inspired by these findings, we propose a GAN-based framework, called HARGAN, for human action recognition in surveillance videos. The framework exploits a retrained ResNet50 deep learning model and a convolutional LSTM for better action recognition performance. Our framework has two critical functionalities: feature learning and human action recognition. The former is achieved by the ResNet50 model, while the latter is achieved by the GAN-based convolutional LSTM model. To realize the framework, we propose an algorithm called the Generative Adversarial Approach for Human Action Recognition (GAA-HAR). We evaluate the framework on the UCF50 benchmark dataset, which is extensively used in human action recognition research. Our experimental results show that the proposed framework outperforms existing baseline models such as CNN, LSTM, and convolutional LSTM, achieving the highest accuracy of 97.73%. Our framework can be applied in video analytics for large-scale public surveillance.
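
The sketch below illustrates one plausible way to combine a ResNet50 frame-level feature extractor with a convolutional LSTM classifier in Keras, corresponding to the two functionalities described above. All shapes, layer sizes, and names (e.g. build_recognizer, SEQ_LEN, NUM_CLASSES) are illustrative assumptions, the adversarial training loop of GAA-HAR is omitted, and this is not the authors' exact HARGAN implementation.

```python
# Illustrative sketch only: a ResNet50 feature extractor feeding a
# ConvLSTM-based action classifier. Layer sizes, sequence length, and the
# number of classes (50, matching UCF50) are assumptions, not the exact
# HARGAN / GAA-HAR configuration; the GAN training loop is not shown.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

SEQ_LEN, H, W, NUM_CLASSES = 16, 224, 224, 50  # assumed clip shape and UCF50 classes

def build_feature_extractor():
    """Frozen ResNet50 backbone mapping one frame to a spatial feature map."""
    backbone = ResNet50(weights="imagenet", include_top=False,
                        input_shape=(H, W, 3))
    backbone.trainable = False
    return backbone

def build_recognizer():
    """ConvLSTM classifier over per-frame ResNet50 feature maps."""
    frames = layers.Input(shape=(SEQ_LEN, H, W, 3))
    # Apply the ResNet50 backbone to every frame in the clip (feature learning).
    feats = layers.TimeDistributed(build_feature_extractor())(frames)
    # Aggregate temporal information with a convolutional LSTM (action recognition).
    x = layers.ConvLSTM2D(64, kernel_size=3, padding="same")(feats)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(frames, outputs, name="convlstm_recognizer")

if __name__ == "__main__":
    model = build_recognizer()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
```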