Objective: To assess the quality, reliability, readability, and similarity of the information that a recently developed NLP-based artificial intelligence model, ChatGPT 4, provides to users seeking Cleft Lip and Palate (CLP)-related information.
Design: The responses provided by OpenAI ChatGPT to 50 CLP-related questions were evaluated with several tools: the Ensuring Quality Information for Patients (EQIP) tool, a Reliability Scoring System (adapted from DISCERN), the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Reading Grade Level (FKRGL) formulas, the Global Quality Scale (GQS), and a Similarity Index obtained with a plagiarism-detection tool. Jamovi (The Jamovi Project, 2022, version 2.3; Sydney, Australia) software was used for all statistical analyses.
Results: Based on the reliability and GQS values, ChatGPT demonstrated high reliability and good quality for CLP-related information. According to the FRES results, ChatGPT's responses are difficult to read, and the Similarity Index values indicate an acceptable level of similarity. There was no significant difference in EQIP, Reliability Scoring System, FRES, FKRGL, GQS, or Similarity Index values between the two categories.
Conclusion: OpenAI ChatGPT provides highly reliable, good-quality information on CLP with an acceptable similarity rate, although the text is difficult to read. Ensuring that information obtained through these models is verified and assessed by a qualified medical expert is crucial.
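For reference, the FRES and FKRGL metrics named above are the standard Flesch readability formulas (not restated in the source), computed from word, sentence, and syllable counts of the evaluated text:

FRES = 206.835 - 1.015 * (total words / total sentences) - 84.6 * (total syllables / total words)
FKRGL = 0.39 * (total words / total sentences) + 11.8 * (total syllables / total words) - 15.59

Higher FRES values (on a 0-100 scale) indicate easier reading, with scores below about 50 conventionally rated "difficult," while FKRGL estimates the US school grade level required to understand the text.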