Objectives
Approximately 80% of people with epilepsy live in low- and middle-income countries (LMICs), where limited resources and stigma hinder accurate diagnosis and treatment. Clinical machine learning models have demonstrated substantial promise in supporting the diagnostic process in LMICs without relying on specialised or trained personnel. How well these models generalise to naïve regions is, however, underexplored. Here, we use a novel approach to assess the suitability and applicability of such clinical tools for diagnosing active convulsive epilepsy in settings beyond their original training contexts.

Methods
We sourced data from the Study of Epidemiology of Epilepsy in Demographic Sites (SEEDS) dataset, which includes demographic information and clinical variables related to diagnosing epilepsy across five sub-Saharan African sites. For each site, we developed a region-specific (single-site) predictive model for epilepsy and evaluated its performance on the other sites. We then iteratively added sites to a multi-site model and evaluated its performance on the omitted regions. Model performance and parameters were then compared across every permutation of sites. We used a leave-one-site-out cross-validation analysis to assess the impact of incorporating individual site data in the model (sketched schematically below, after the key points).

Results
Single-site clinical models performed well within their own regions but generally worse when evaluated on other regions (p<0.05). Model weights and optimal thresholds varied markedly across sites. When the models were trained on data from an increasing number of sites, mean internal performance decreased while external performance improved.

Conclusions
Clinical models for epilepsy diagnosis in LMICs demonstrate characteristic traits of machine learning models, such as limited generalisability and a trade-off between internal and external performance. The relationship between predictors and model outcomes also varies across sites, suggesting the need to update specific aspects of a model with local data before broader implementation. These variations are likely to be specific to the cultural context of diagnosis. We recommend developing models adapted to the cultures and contexts of their intended deployment and caution against deploying region- and culture-naïve models without thorough prior evaluation.

Key points
- Machine learning-driven clinical tools are becoming more prevalent in low-resource settings; however, their performance across regions is not fully established. Given their potential impact, it is crucial that models are robust, safe and appropriately deployed.
- Models perform poorly when making predictions for regions that were not included in their training data, as opposed to sites that were.
- Models trained on different regions can have different optimal parameters and thresholds for performance in practice.
- There is a trade-off between internal and external performance: a model with better external performance usually has worse internal performance but is generally more robust overall.
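To make the evaluation protocol concrete, the following is a minimal sketch of the leave-one-site-out analysis described in the Methods. It assumes a tabular dataset with one row per participant, a site identifier and a binary epilepsy label; the column names, the logistic-regression classifier and the AUC metric are illustrative assumptions rather than the study's exact pipeline.

```python
# Minimal sketch of a leave-one-site-out evaluation. The "site" and
# "epilepsy" column names, the classifier and the metric are assumptions
# for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def leave_one_site_out(df: pd.DataFrame, feature_cols: list[str],
                       label_col: str = "epilepsy",
                       site_col: str = "site") -> dict[str, float]:
    """Train on all sites except one and score on the held-out site."""
    external_auc = {}
    for held_out in df[site_col].unique():
        train = df[df[site_col] != held_out]
        test = df[df[site_col] == held_out]

        model = LogisticRegression(max_iter=1000)
        model.fit(train[feature_cols], train[label_col])

        # External performance: AUC on the site the model never saw.
        probs = model.predict_proba(test[feature_cols])[:, 1]
        external_auc[held_out] = roc_auc_score(test[label_col], probs)
    return external_auc


# Hypothetical usage (file name and feature columns are placeholders):
# seeds = pd.read_csv("seeds_clinical_variables.csv")
# print(leave_one_site_out(seeds, feature_cols=["convulsions", "loss_of_consciousness"]))
```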
SEEDS collaborators
Agincourt HDSS, South Africa: Ryan Wagner, Rhian Twine, Myles Connor, F. Xavier Gómez-Olivé, Mark Collinson (and INDEPTH Network, Accra, Ghana), Kathleen Kahn (and INDEPTH Network, Accra, Ghana), Stephen Tollman (and INDEPTH Network, Accra, Ghana)
Ifakara HDSS, Tanzania: Honratio Masanja (and INDEPTH Network, Accra, Ghana), Alexander Mathew
Iganga/Mayuge HDSS, Uganda: Angelina Kakooza, George Pariyo, Stefan Peterson (and Uppsala University, Dept of Women’s and Children’s Health, IMCH; Karolinska Institutet, Div. of Global Health, IHCAR; Makerere University School of Public Health), Donald Ndyomughenyi
Kilifi HDSS, Kenya: Anthony K Ngugi, Rachael Odhiambo, Eddie Chengo, Martin Chabi, Evasius Bauni, Gathoni Kamuyu, Victor Mung’ala Odera, James O Mageto, Isaac Egesa, Clarah Khalayi, Charles R Newton
Kintampo HDSS, Ghana: Ken Ae-Ngibise, Bright Akpalu, Albert Akpalu, Francic Agbokey, Patrick Adjei, Seth Owusu-Agyei, Victor Duko (and INDEPTH Network, Accra, Ghana)
London School of Hygiene and Tropical Medicine: Christian Bottomley, Immo Kleinschmidt
Institute of Psychiatry, King’s College London: Victor CK Doku
UCL Queen Square Institute of Neurology, London: Josemir W Sander
Swiss Tropical Institute: Peter Odermatt