Driver fatigue is widely recognized as a critical road-safety factor and a significant contributor to traffic accidents. Developing systems that monitor drowsy drivers and alert them is therefore essential to reducing such incidents. This research proposes a robust framework for driver drowsiness detection based on a CNN-LSTM architecture that fuses facial landmark analysis with multiple aspect-ratio features. The Eye Aspect Ratio (EAR), Pupil Circularity (PUC), Mouth Aspect Ratio (MAR), and Mouth-over-Eye Aspect Ratio (MOE) serve as the key metrics for detecting drowsiness. The CNN-LSTM model was trained on the YawDD, NITYMD, and FL3D datasets as well as a custom dataset. Data augmentation techniques such as flipping, scaling, shearing, rotation, and brightness and contrast adjustment are applied to improve generalization across illumination conditions and driver postures. The system is deployed on NVIDIA's 128-core Jetson Nano GPU platform and processes video frames captured by a CSI camera in real time. It detects eye closure and yawning as symptoms of driver fatigue and immediately raises an alert through seatbelt vibrations and pre-recorded voice messages. Internet connectivity enables remote monitoring via mobile applications, ensuring that alerts reach both the driver and passengers. The CNN-LSTM model was evaluated across a range of scenarios, including daytime and nighttime conditions, demonstrating its effectiveness. The proposed framework achieved an accuracy of 98%, a precision of 95%, a recall of 93%, an F1 score of 94%, and an AUC of 99%, confirming its suitability for practical deployment. By incorporating EAR, MAR, PUC, and MOE into the CNN-LSTM architecture for early drowsiness detection, the system can alert the driver in time to take the precautions needed to avoid an accident. The proposed approach enhances driver safety and offers a scalable solution that adapts to different environments and driver populations.
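As an illustrative sketch only (not the authors' exact implementation), the four metrics can be computed per frame from dlib-style 68-point facial landmarks as shown below; the landmark indices, helper names, and the pupil-area approximation are assumptions made for illustration.

```python
import numpy as np

# Assumed dlib 68-point indexing (illustrative): left eye 36-41, right eye 42-47, inner lip 60-67.

def _dist(a, b):
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def eye_aspect_ratio(eye):
    # eye: six (x, y) points p1..p6 around the eye contour
    return (_dist(eye[1], eye[5]) + _dist(eye[2], eye[4])) / (2.0 * _dist(eye[0], eye[3]))

def pupil_circularity(eye):
    # Circularity = 4*pi*Area / Perimeter^2 (1.0 for a perfect circle);
    # the area is approximated from the vertical eye opening as a stand-in for the pupil.
    perimeter = sum(_dist(eye[i], eye[(i + 1) % 6]) for i in range(6))
    area = np.pi * (_dist(eye[1], eye[4]) / 2.0) ** 2
    return 4.0 * np.pi * area / (perimeter ** 2)

def mouth_aspect_ratio(mouth):
    # mouth: eight inner-lip points; three vertical openings over the horizontal width
    return (_dist(mouth[1], mouth[7]) + _dist(mouth[2], mouth[6]) +
            _dist(mouth[3], mouth[5])) / (2.0 * _dist(mouth[0], mouth[4]))

def frame_features(landmarks):
    # landmarks: (68, 2) array of facial landmark coordinates for one frame
    left_eye, right_eye = landmarks[36:42], landmarks[42:48]
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    puc = (pupil_circularity(left_eye) + pupil_circularity(right_eye)) / 2.0
    mar = mouth_aspect_ratio(landmarks[60:68])
    moe = mar / ear
    return np.array([ear, puc, mar, moe], dtype=np.float32)
```

In a pipeline of the kind described in the abstract, such per-frame feature vectors would typically be stacked over a sliding window of frames and passed, together with the CNN's visual features, to the LSTM stage for temporal classification of eye closure and yawning.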