“…When evaluating a classification system, separate data should ideally be used for training and testing, as in [25,38,51,55,56,64,66,67,85,96,97,99,100], in order to prevent the overestimation of performance that occurs when a model is evaluated on the same data it was trained on. When the dataset size is limited, cross-validation is often used instead, as in [24,31,32,33,39,42,43,45,52,54,58,74,75,81,86,87,88,89,90,98]. In FOG research, leave-one-person-out cross-validation was the most common scheme.…”
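To make the leave-one-person-out scheme concrete, the sketch below uses scikit-learn's `LeaveOneGroupOut` splitter, which holds out all windows from one subject per fold. The data here are synthetic stand-ins for real FOG sensor features; the feature dimensions, subject count, and classifier choice are illustrative assumptions, not taken from any of the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical data: 120 feature windows from 6 subjects (20 windows each).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))              # 8 synthetic sensor features
y = rng.integers(0, 2, size=120)           # 0 = no FOG, 1 = FOG episode
subjects = np.repeat(np.arange(6), 20)     # subject ID for each window

# Leave-one-person-out: each fold trains on 5 subjects, tests on the 6th,
# so no subject's data appears in both training and test sets.
logo = LeaveOneGroupOut()
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         X, y, groups=subjects, cv=logo)
print(len(scores))  # one accuracy score per held-out subject
```

Grouping by subject matters because windows from the same person are correlated; a plain k-fold split would leak subject-specific patterns into the test folds and inflate the reported accuracy.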