Neurological disorders affect many bodily functions, and there is a growing need for advanced solutions that offer therapeutic advice to patients. To address this need, a framework based on artificial intelligence and machine learning techniques is proposed. The proposed system combines data from RGB, inertial, and depth sensors. The inertial signals are filtered with a notch filter, whose upper and lower cutoff frequencies are examined to obtain optimal wearable sensor data. Multiple features are then computed, including Mel-frequency cepstral coefficients (MFCCs), statistical features, and Gaussian mixture model (GMM) features. In parallel, silhouettes are extracted from the RGB and depth images, and four key parameters are used to gauge how accurately patients with neurological disorders perform their activities: the angles formed between the hands and the lower half, upper half, and center of the body, and the angle between the two hands of the silhouette. The resulting features are fused, reduced with principal component analysis, and classified using a reweighted genetic algorithm. Cross-validation on the mRI (a multi-modal 3D human pose estimation dataset using mmWave, RGB-D, and inertial sensors) and MHEALTH (mobile health) datasets yielded recognition accuracies of 97.29\% and 97.94\%, respectively. The study highlights the need for additional datasets to address the remaining challenges of multi-modal, sensor-based human activity recognition for rehabilitation.
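As a rough illustration of the inertial preprocessing step, the sketch below notch-filters a wearable accelerometer signal. The sampling rate (50 Hz), interference frequency (10 Hz), and quality factor are assumed values for illustration; the paper selects its cutoff frequencies by examination, which is not shown here.

\begin{verbatim}
# Minimal sketch: notch-filtering one axis of an inertial signal.
# fs, f0, and Q are assumptions, not values from the paper.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 50.0   # assumed sampling rate of the wearable sensor (Hz)
f0 = 10.0   # assumed interference frequency to attenuate (Hz)
Q = 30.0    # quality factor: higher values give a narrower notch

b, a = iirnotch(w0=f0, Q=Q, fs=fs)  # design the IIR notch filter

# Synthetic signal: 1.5 Hz motion component plus 10 Hz interference.
t = np.arange(0, 5, 1 / fs)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.4 * np.sin(2 * np.pi * f0 * t)

# Zero-phase filtering avoids shifting the motion signal in time.
accel_filtered = filtfilt(b, a, accel)
\end{verbatim}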
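The next sketch illustrates window-level statistical and GMM features of the kind named above. The exact feature definitions used in the paper are not reproduced here, and MFCC extraction (e.g., via an audio feature library) is omitted for brevity; the window size and GMM component count are assumptions.

\begin{verbatim}
# Hedged sketch: statistical and GMM features for one signal window.
import numpy as np
from sklearn.mixture import GaussianMixture

def statistical_features(window):
    """Common summary statistics of one window (an assumed set)."""
    return np.array([window.mean(), window.std(), window.min(),
                     window.max(), np.median(window)])

def gmm_features(window, n_components=3):
    """Fit a small GMM to the window and use its parameters as features."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(window.reshape(-1, 1))
    return np.concatenate([gmm.means_.ravel(), gmm.weights_])

rng = np.random.default_rng(0)
window = rng.normal(size=128)   # one window of filtered inertial data
feats = np.concatenate([statistical_features(window),
                        gmm_features(window)])
\end{verbatim}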
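For the silhouette-based parameters, the sketch below computes one representative quantity: the angle formed at the body center between the two hands. The keypoint coordinates are hypothetical and assumed to come from an upstream silhouette/pose step.

\begin{verbatim}
# Hedged sketch: angle at the body center between the two hands.
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` formed by rays toward p1 and p2."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative pixel coordinates (hypothetical, not from the paper).
body_center = (160, 120)
left_hand = (100, 80)
right_hand = (220, 85)

print(angle_at(body_center, left_hand, right_hand))
\end{verbatim}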
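Finally, a sketch of the fusion, reduction, and evaluation stage, assuming the inertial and vision features have already been computed per activity window. PCA and cross-validation come from scikit-learn; the paper's reweighted genetic algorithm classifier is not reproduced here and is replaced by a generic stand-in (logistic regression) purely for illustration.

\begin{verbatim}
# Hedged sketch: feature-level fusion, PCA reduction, cross-validation.
# All arrays below are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
inertial_feats = rng.normal(size=(200, 40))  # e.g., MFCC/stat/GMM features
vision_feats = rng.normal(size=(200, 4))     # e.g., the four angle parameters
labels = rng.integers(0, 5, size=200)        # activity classes

fused = np.hstack([inertial_feats, vision_feats])  # feature-level fusion

model = make_pipeline(PCA(n_components=10),
                      LogisticRegression(max_iter=1000))
scores = cross_val_score(model, fused, labels, cv=5)
print(scores.mean())
\end{verbatim}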