Human Action Recognition (HAR) is a vital area of computer vision with diverse applications in security, healthcare, and human-computer interaction. Addressing the challenges of HAR, particularly in dynamic and complex environments, is essential to advancing the field. The Human Actions in Diverse Environments (HADE) framework introduced in this paper advances the capabilities of Convolutional Neural Networks (CNNs) for effective HAR. The strength of the HADE framework lies in its carefully curated dataset, derived primarily from smartphone camera footage, which captures a wide range of human actions across varied settings and provides a robust foundation for training our two novel HAR models, HADE I and HADE II. Both models are designed and optimized for parallel processing on GPUs, yielding significant gains in training and inference efficiency. In a comprehensive evaluation, the HADE framework achieved a HAR accuracy of 83.57% on our custom dataset, a considerable improvement over existing methodologies that underscores the efficacy of the HADE approach in accurately interpreting complex human actions. The framework is particularly promising for healthcare, specifically neurological patient care, where it can aid early detection and support personalized treatment plans. Future research will focus on expanding the range of recognized actions and on real-time processing. The HADE framework thus makes a substantial contribution to computer vision and paves the way for practical application across various sectors.