Early detection of esophageal cancer (EC) remains difficult, which contributes to its status as a leading cause of cancer-related death. This study used You Only Look Once (YOLO) frameworks, specifically YOLOv5 and YOLOv8, to detect early-stage EC in a dataset sourced from the Division of Gastroenterology and Hepatology, Ditmanson Medical Foundation, Chia-Yi Christian Hospital. The dataset comprised 2741 white-light images (WLI) and 2741 hyperspectral narrowband images (HSI-NBI), which were divided into 60% training, 20% validation, and 20% test sets to support robust detection. The HSI-NBI images were generated with the spectrum-aided vision enhancer (SAVE), a conversion algorithm that transforms a WLI into an NBI-like image without requiring a spectrometer or spectral head. The primary goal was to identify dysplasia and squamous cell carcinoma (SCC). Model performance was evaluated with five metrics: precision, recall, F1-score, mean average precision (mAP), and the confusion matrix. The experimental results showed that models trained on HSI-NBI images learned SCC characteristics more effectively than those trained on the original RGB (WLI) images. Within the YOLO framework, YOLOv5 outperformed YOLOv8, indicating that YOLOv5's architecture learned the relevant features more effectively. The YOLOv5 model combined with HSI-NBI performed best, achieving a precision of 85.1% (CI95: 83.2–87.0%, p < 0.01) for SCC and an F1-score of 52.5% (CI95: 50.1–54.9%, p < 0.01) for dysplasia. Both figures exceeded those of YOLOv8, which achieved a precision of 81.7% (CI95: 79.6–83.8%, p < 0.01) and an F1-score of 49.4% (CI95: 47.0–51.8%, p < 0.05). Across multiple scenarios, the YOLOv5 model with HSI-NBI outperformed the other models, and the difference was statistically significant, indicating that this combination substantially improves early EC detection.
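For context, the precision, recall, and F1-score reported above are derived from confusion-matrix counts in the standard way. The following is a minimal Python sketch of that computation; the function name and the example counts are hypothetical illustrations, not values from this study.

```python
# Minimal sketch: deriving precision, recall, and F1-score from
# confusion-matrix counts (true positives, false positives, false
# negatives). The counts used below are hypothetical placeholders,
# not results reported in this study.

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute precision, recall, and F1-score for one lesion class."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example with hypothetical counts for a single class (e.g., SCC):
print(detection_metrics(tp=120, fp=21, fn=95))
```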