The pharmacokinetic variability of lamotrigine (LTG) plays a significant role in its dosing requirements. Our goal here was to use noninvasive clinical parameters to predict the dose-adjusted concentration (C/D ratio) of LTG with machine learning (ML) algorithms. A total of 1141 therapeutic drug-monitoring measurements were used; 80% were randomly selected as the "derivation cohort" to develop the prediction algorithm, and the remaining 20% constituted the "validation cohort" used to test the finally selected model. Fifteen ML models were optimized and evaluated by tenfold cross-validation on the derivation cohort and screened by the mean absolute error (MAE). Overall, the nonlinear models outperformed the linear models. The extra-trees regression algorithm delivered the best performance and was chosen to establish the predictive model. The important features were then analyzed and the model's parameters were tuned to develop the final prediction model, which accurately described the C/D ratio of LTG, especially in the intermediate-to-high range (≥ 22.1 μg mL⁻¹ g⁻¹ day), as illustrated by a minimal bias (mean relative error = +3%), good precision (MAE = 8.7 μg mL⁻¹ g⁻¹ day), and a high percentage of predictions within ±20% of the empirical values (60.47%). To the best of our knowledge, this is the first study to use ML algorithms to predict the C/D ratio of LTG. The results can help clinicians adjust the doses of LTG administered to patients to minimize adverse reactions.
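The workflow summarized above (random 80/20 split, tenfold cross-validation screened by MAE, and an extra-trees regressor evaluated on the held-out cohort) can be expressed as a short scikit-learn sketch. The snippet below is illustrative only: the file name ltg_tdm.csv, the column name cd_ratio, and the covariate layout are hypothetical assumptions, and the hyperparameters are library defaults rather than the tuned values from the study.

```python
# Minimal sketch of the modelling pipeline, assuming the TDM data are in a
# CSV with a "cd_ratio" target column (C/D ratio, ug mL-1 g-1 day) and the
# clinical covariates as the remaining columns (names are hypothetical).
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split, cross_val_score

df = pd.read_csv("ltg_tdm.csv")                      # hypothetical file name
X, y = df.drop(columns="cd_ratio"), df["cd_ratio"]

# 80% derivation cohort / 20% validation cohort, selected at random
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Candidate model: extra-trees regression, screened by tenfold cross-validated MAE
model = ExtraTreesRegressor(n_estimators=500, random_state=0)
cv_mae = -cross_val_score(model, X_dev, y_dev, cv=10,
                          scoring="neg_mean_absolute_error").mean()
print(f"Tenfold CV MAE on derivation cohort: {cv_mae:.1f}")

# Final evaluation on the held-out validation cohort: MAE, mean relative
# error, and the percentage of predictions within +/-20% of observed values
model.fit(X_dev, y_dev)
pred = model.predict(X_val)
mae = np.mean(np.abs(pred - y_val))
mre = np.mean((pred - y_val) / y_val) * 100
within_20 = np.mean(np.abs(pred - y_val) / y_val <= 0.2) * 100
print(f"MAE = {mae:.1f}, MRE = {mre:+.1f}%, within ±20%: {within_20:.1f}%")
```

In practice the fifteen candidate algorithms would each be passed through the same cross-validation loop, and feature-importance analysis (e.g., the fitted model's feature_importances_ attribute) would guide the subsequent parameter tuning described in the abstract.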