The early detection and precise diagnosis of gastrointestinal diseases, particularly gastric cancer, play a vital role in improving patient survival rates and treatment outcomes. However, diagnosing these conditions can be challenging when symptoms are mild or absent. Endoscopy is commonly used for diagnosis, but it requires endoscopists with a high level of specialized knowledge to identify diseases accurately. Integrating artificial intelligence (AI) with endoscopic imaging can assist in diagnosis, reduce missed cases, and enable early treatment, thereby improving patient survival rates. Previous studies have mainly focused on improving disease classification accuracy, often overlooking the reliability issues caused by imbalanced and limited medical data. In this study, we propose a solution to the challenges posed by imbalanced and sparse medical image data by introducing model-agnostic meta-learning (MAML). To accomplish this, we employ the YOLO-MR model, which incorporates the concept of meta-recognition into the You Only Look Once (YOLO) model. Experimental results show that the mean average precision (mAP) of the conventional YOLO model is only 41.7, indicating a significant impact of data imbalance. Traditional data augmentation methods yield an mAP of only 65.2, whereas the proposed YOLO-MR model achieves an mAP of 96, an improvement of 54.3 over the conventional YOLO model, reducing the accuracy disparities between disease classes and addressing the issue of data imbalance. Furthermore, this research demonstrates the effectiveness of techniques such as MAML and residual blocks in addressing data imbalance in medical image recognition. These findings hold substantial potential for tackling the challenges posed by limited and imbalanced medical data in the healthcare field.

INDEX TERMS Endoscopy, you only look once (YOLO), meta-learning, model-agnostic meta-learning (MAML), residual block
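To make the MAML idea referenced above concrete, the sketch below shows the generic inner/outer (bi-level) optimization loop that MAML uses to find initial weights that adapt quickly from few examples. This is an illustrative toy example, not the paper's YOLO-MR implementation: the regression model, the `sample_task` generator, the inner learning rate, and the task batch size are all placeholder assumptions, and it assumes PyTorch 2.x for `torch.func.functional_call`.

```python
# Minimal MAML sketch (illustrative only; not the paper's YOLO-MR code).
import torch
from torch import nn
from torch.func import functional_call

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr = 0.01  # assumed inner-loop step size

def sample_task():
    # Toy few-shot task: fit y = a*sin(x) with a random amplitude a.
    a = torch.rand(1) * 4 + 1
    xs = torch.rand(10, 1) * 6 - 3   # support set
    xq = torch.rand(10, 1) * 6 - 3   # query set
    return xs, a * torch.sin(xs), xq, a * torch.sin(xq)

for step in range(1000):
    meta_loss = 0.0
    for _ in range(4):  # assumed meta-batch of 4 tasks
        xs, ys, xq, yq = sample_task()
        params = dict(model.named_parameters())

        # Inner loop: one gradient step on the task's support set,
        # keeping the graph so meta-gradients can flow through it.
        support_loss = loss_fn(functional_call(model, params, (xs,)), ys)
        grads = torch.autograd.grad(support_loss, tuple(params.values()),
                                    create_graph=True)
        adapted = {n: p - inner_lr * g
                   for (n, p), g in zip(params.items(), grads)}

        # Outer loop: evaluate the adapted weights on the query set.
        meta_loss = meta_loss + loss_fn(functional_call(model, adapted, (xq,)), yq)

    # Meta-update of the shared initialization.
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

The key design point, which carries over to detection models such as YOLO, is that the outer update optimizes the initialization itself rather than any single task's weights, which is what makes few-shot adaptation to rare (under-represented) classes possible.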