Objective
This study investigated whether artificial intelligence (AI) models combining voice signals, demographics, and structured medical records can distinguish glottic neoplasm from benign voice disorders.

Methods
We used a primary dataset containing 2–3 s recordings of the sustained vowel "ah", demographics, and 26 items of structured medical records (e.g., symptoms, comorbidities, smoking and alcohol consumption, vocal demand) from 60 patients with pathology-proved glottic neoplasm (i.e., squamous cell carcinoma, carcinoma in situ, and dysplasia) and 1940 patients with benign voice disorders. The validation dataset comprised data from 23 patients with glottic neoplasm and 1331 patients with benign disorders. The AI model combined convolutional neural networks, gated recurrent units, and attention layers. We used 10-fold cross-validation (training–validation–testing: 8–1–1) and preserved the ratio of neoplasm to benign disorders in each fold.

Results
The AI model using voice signals alone reached an area under the receiver operating characteristic curve (AUC) of 0.631; adding demographics increased this to 0.807. The highest AUC of 0.878 was achieved when combining voice, demographics, and medical records (sensitivity: 0.783, specificity: 0.816, accuracy: 0.815). External validation yielded an AUC of 0.785 (voice plus demographics; sensitivity: 0.739, specificity: 0.745, accuracy: 0.745). Subanalysis showed that AI had higher sensitivity but lower specificity than human assessment (p < 0.01). The accuracy of AI detection with additional medical records was comparable with human assessment (82% vs. 83%, p = 0.78).

Conclusions
Voice signals alone were insufficient for AI differentiation between glottic neoplasm and benign voice disorders, but additional demographics and medical records notably improved AI performance, approximating the prediction accuracy of human assessment.

Level of Evidence
NA. Laryngoscope, 2024
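The study's code is not published; purely as an illustration of the stratified 10-fold scheme described in the Methods (an 8–1–1 train–validation–test split that preserves the neoplasm-to-benign ratio in every fold), here is a minimal pure-Python sketch. The function name, seed, and round-robin assignment are assumptions for illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

def stratified_folds(labels, n_folds=10, seed=42):
    """Assign each sample index to one of n_folds folds, preserving the
    class ratio (e.g., neoplasm vs. benign) within every fold."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(n_folds)]
    for indices in by_class.values():
        rng.shuffle(indices)
        for i, idx in enumerate(indices):
            # Round-robin within each class keeps per-fold ratios balanced.
            folds[i % n_folds].append(idx)
    return folds

# Class sizes mirroring the primary dataset: 60 neoplasm (1), 1940 benign (0).
labels = [1] * 60 + [0] * 1940
folds = stratified_folds(labels)

# In each cross-validation round, 8 folds train, 1 validates, 1 tests (8-1-1).
test_fold = folds[0]
val_fold = folds[1]
train_idx = [i for fold in folds[2:] for i in fold]
```

With 60 and 1940 samples, each fold receives exactly 6 neoplasm and 194 benign cases, so every validation and test fold sees the same 3% neoplasm prevalence as the full dataset.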