Recent research in cognitive motor action decoding focuses on the acquisition and analysis of functional near-infrared spectroscopy (fNIRS) data. This work aims to classify two motor activities, mental drawing (MD) and spatial navigation (SN), distinguishing them from non-motor baseline data and from each other using fNIRS signals. Accurate activity detection in non-stationary signals such as fNIRS is challenging and requires complex feature descriptors. A novel feature-generation framework is proposed that fuses wavelet, Hilbert, symlet, and Hjorth features to improve classification accuracy. The fused feature combines statistical descriptors, time-frequency localization, edge and texture features, and phase information to detect and localize activity accurately. Three variants of independent component analysis (ICA), FastICA, Picard, and Infomax, were applied during preprocessing to remove noise and motion artifacts. Two independent binary classifiers handle the complexity of the task: one detects MD and the other detects SN. Four algorithms were evaluated: k-nearest neighbors (KNN), linear discriminant analysis (LDA), light gradient-boosting machine (LGBM), and extreme gradient boosting (XGBoost). The LGBM classifier achieved the highest accuracies, 98% for MD and 97% for SN. Comparison with existing research shows that the proposed method yields the highest classification accuracies. Statistical validation of the proposed feature-generation scheme with the Kruskal–Wallis H-test and the Mann–Whitney U non-parametric test confirms the reliability of the proposed mechanism.
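The abstract names the Hjorth parameters as one component of the fused feature vector but gives no implementation details. A minimal sketch of how these three time-domain descriptors (activity, mobility, complexity) could be computed for a single fNIRS channel epoch, assuming NumPy and using a synthetic noisy sinusoid as placeholder data:

```python
import numpy as np

def hjorth_parameters(x):
    """Compute the Hjorth parameters of a 1-D signal.

    Activity   = variance of the signal
    Mobility   = sqrt(var(dx) / var(x))
    Complexity = mobility of the derivative / mobility of the signal
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)    # first difference approximates the derivative
    ddx = np.diff(dx)  # second difference
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return activity, mobility, complexity

# Hypothetical stand-in for one preprocessed fNIRS channel epoch:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
epoch = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)

activity, mobility, complexity = hjorth_parameters(epoch)
```

In the proposed framework these scalars would be concatenated with the wavelet, symlet, and Hilbert-derived features to form the fused feature vector; the exact fusion order and normalization are not specified in the abstract.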