Human Activity Recognition (HAR) and Ambient Assisted Living (AAL) are integral components of smart homes, sports, surveillance, and investigative applications. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies compromise the privacy of the elderly, a fundamental right of every human. However, extracting discriminative features from 1D multi-sensor data is challenging. This research therefore focuses on extracting distinguishable patterns and deep features from spectral images obtained by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope signals, serve as the inputs for different daily activities and yield rich information under time-frequency analysis. This time-series information is mapped into spectral images called 'scalograms', derived from the continuous wavelet transform (CWT). Deep activity features are extracted from the activity images using deep learning models such as CNN, MobileNetV3, ResNet, and GoogLeNet, and are subsequently classified using a conventional classifier. The proposed model is validated on the SisFall and PAMAP2 benchmark datasets. In the experiments, the proposed model achieves its best activity-recognition performance, 98.4% accuracy on SisFall and 98.1% on PAMAP2, using the Morlet mother wavelet with ResNet-101 and a softmax classifier, and outperforms state-of-the-art algorithms.
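
To make the scalogram-and-backbone pipeline concrete, the following is a minimal sketch, not the paper's exact implementation: the window length, sampling rate, scale range, and normalization are illustrative assumptions, and PyWavelets ("morl" for Morlet) plus a torchvision ResNet-101 stand in for the authors' tooling.

```python
import numpy as np
import pywt
import torch
from torchvision import models, transforms
from PIL import Image

fs = 200                           # assumed sampling rate in Hz
window = np.random.randn(3 * fs)   # stand-in for one 3 s accelerometer-axis window

# Continuous wavelet transform with the Morlet mother wavelet.
scales = np.arange(1, 129)         # illustrative scale range
coeffs, freqs = pywt.cwt(window, scales, "morl", sampling_period=1.0 / fs)

# Convert |coefficients| into an 8-bit scalogram image.
mag = np.abs(coeffs)
img = (255 * (mag - mag.min()) / (np.ptp(mag) + 1e-12)).astype(np.uint8)
scalogram = Image.fromarray(img).convert("RGB")   # replicate to 3 channels

# Extract deep activity features with a pretrained ResNet-101 backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
backbone = models.resnet101(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()  # drop the ImageNet head, keep 2048-d features
backbone.eval()

with torch.no_grad():
    features = backbone(preprocess(scalogram).unsqueeze(0))  # shape: (1, 2048)
```

In the pipeline described above, such feature vectors would then be passed to the softmax (or another conventional) classifier for activity prediction.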