Sign language recognition plays a crucial role in improving communication accessibility for the Deaf and hard-of-hearing communities. In Korea, many individuals with hearing and speech impairments depend on Korean Sign Language (KSL) as their primary means of communication. Researchers have developed recognition systems for many other sign languages, but little work has addressed KSL alphabet recognition, and existing KSL recognition systems have faced significant performance limitations because their features are insufficiently discriminative. To address these issues, we introduce a KSL recognition system built on a strategic fusion approach, combining joint skeleton-based handcrafted features with pixel-based ResNet101 transfer-learning features to overcome the limitations of traditional systems. The proposed system consists of two distinct streams: the first extracts essential handcrafted features, emphasizing hand-orientation information within KSL gestures; the second concurrently employs a deep-learning-based ResNet101 module to capture hierarchical representations of the KSL alphabet signs. By combining the essential information from the first stream with the hierarchical features from the second, we generate multiple levels of fused features with the goal of forming a comprehensive representation of KSL gestures. Finally, the concatenated feature is fed into a deep-learning-based classification module for classification. We conducted extensive experiments on a newly created KSL alphabet dataset, an existing KSL digit dataset, and the existing ArSL and ASL benchmark datasets.
The results show that our fusion approach substantially improves recognition accuracy on both our KSL datasets and the benchmark datasets, demonstrating the system's superiority.

INDEX TERMS Korean sign language (KSL), hand gesture recognition, geometric feature, distance feature, angle feature, ResNet.
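The two-stream fusion described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the 21-joint hand skeleton, the bone chain used for angles, the 2048-dimensional ResNet101 feature vector, and the random stand-in inputs are all assumptions made for the sketch; the actual system extracts skeleton joints and CNN features from real KSL images.

```python
import numpy as np

def handcrafted_features(joints):
    """Stream 1 (sketch): distance and angle features from a hand skeleton.

    joints: (21, 3) array of 3-D joint coordinates (21 joints assumed).
    """
    # Pairwise Euclidean distances between all joint pairs (upper triangle).
    diffs = joints[:, None, :] - joints[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(joints), k=1)
    dist_feat = dists[iu]                       # 210 distances for 21 joints

    # Angles between consecutive bone vectors along a hypothetical joint chain,
    # capturing hand-orientation information.
    bones = np.diff(joints, axis=0)             # 20 bone vectors
    a, b = bones[:-1], bones[1:]
    cos = np.einsum('ij,ij->i', a, b) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8)
    angle_feat = np.arccos(np.clip(cos, -1.0, 1.0))   # 19 angles

    return np.concatenate([dist_feat, angle_feat])    # 229-D handcrafted vector

def fuse(skeleton_feat, cnn_feat):
    """Concatenation-based fusion of the two streams (sketch)."""
    return np.concatenate([skeleton_feat, cnn_feat])

# Random stand-ins for a detected skeleton and pooled ResNet101 features.
rng = np.random.default_rng(0)
joints = rng.standard_normal((21, 3))
cnn_feat = rng.standard_normal(2048)            # assumed ResNet101 pooled size

fused = fuse(handcrafted_features(joints), cnn_feat)
print(fused.shape)                              # (2277,) = 229 + 2048
```

The fused vector would then be passed to the classification module; concatenation is the simplest fusion choice, and the paper's "multiple levels of fused features" suggests richer combinations built on the same principle.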