This paper presents a novel approach to real-time fingerspelling recognition using a two-dimensional Convolutional Neural Network (2D CNN). Existing recognition systems often fall short in real-world conditions due to variations in illumination, background, and user-specific characteristics. Our method addresses these challenges and delivers significantly improved performance. Leveraging a robust 2D CNN architecture, the system processes image sequences that reflect the dynamic nature of fingerspelling, focusing on low-level spatial features and temporal patterns to capture its intricate nuances more accurately. Incorporating a real-time video feed further enhances the system's responsiveness. We validate the model through comprehensive experiments, showing a superior recognition rate over current methods; under varied lighting, different backgrounds, and distinct user behaviors, our system consistently outperforms existing approaches. The findings demonstrate that the 2D CNN approach holds promise for improving fingerspelling recognition, thereby aiding communication for the hearing-impaired community. This work paves the way for further exploration of deep learning in real-time sign language interpretation and bears significant implications for accessibility and inclusivity in communication technology.
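To make the core building block concrete, the sketch below implements one convolution → ReLU → max-pooling stage, the kind of low-level spatial feature extraction a 2D CNN applies to each video frame. This is an illustrative NumPy toy, not the paper's actual architecture; the kernel, frame size, and pooling window are assumptions chosen for clarity.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, discarding ragged edges."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One conv -> ReLU -> pool stage on a toy 8x8 "frame".
frame = np.arange(64, dtype=float).reshape(8, 8)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
features = max_pool(relu(conv2d(frame, edge_kernel)))
print(features.shape)  # (3, 3)
```

In a full CNN, many such kernels are learned from data and the stages are stacked, so each layer summarizes a larger spatial region of the hand shape; pooling also grants a degree of robustness to small translations, which matters under the varied backgrounds and user behaviors discussed above.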