…• Results from any model updating [13]

[11] Such as min, max, and median values at the top-10 and overall
[12] Such as recalibration, adjustment of predictor effects, or addition of new predictors
[13] i.e., model specification and model performance

7. Model evaluation [15,42,44,49,64]
• Evaluation dataset(s) [42,44,67]
  - Test and holdout data transparency information [44]
  - Dataset size information: sample size [44,63,64]; rationale for the sample size [63]
  - Preprocessing techniques used [42]
• Comparison between validation and development datasets [63,64]
• Methods used to evaluate model performance, e.g., cross-validation [64]
• Performance measure results [15,42,43,44,49,63,64,69]
• Rationale for performance measures [44]
• Benchmarking against standard datasets [44,49]
• Reliability analysis, e.g., baseline survival [64]
• FAIR [57]
• Third-party performance verifications [44]
• Concept drift [44]
• Interpretation of results [63,64]
  - Whether objectives are met considering the results…
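To make two of the evaluation items above concrete — "methods used to evaluate model performance, e.g., cross-validation" and summary statistics such as the min, max, and median performance values mentioned in footnote 11 — here is a minimal sketch of k-fold cross-validation in pure Python. The majority-class baseline model, the toy label vector, and the function names are all hypothetical illustrations, not anything specified by the cited checklists.

```python
import statistics

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) index splits for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def majority_class_predict(train_labels):
    """Hypothetical baseline 'model': always predict the majority class."""
    return max(set(train_labels), key=train_labels.count)

def cross_validate(labels, k=5):
    """Return per-fold accuracy of the baseline under k-fold CV."""
    scores = []
    for train, test in k_fold_indices(len(labels), k):
        pred = majority_class_predict([labels[i] for i in train])
        acc = sum(labels[i] == pred for i in test) / len(test)
        scores.append(acc)
    return scores

# Toy binary labels standing in for an evaluation dataset.
labels = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
scores = cross_validate(labels, k=5)

# Report the summary statistics a checklist might ask for.
print("min/median/max accuracy:",
      min(scores), statistics.median(scores), max(scores))
```

Reporting the full per-fold score list alongside the min, median, and max (rather than a single averaged number) is one simple way to satisfy the transparency items above.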