“…As an input to machine learning models, bag-of-words (BOW) text representations were applied most often (N = 30/89, 33.7%) [32, 41, 52, 54–56, 59, 61, 68, 72, 82, 84, 85, 87, 89, 92, 93, 95, 96, 100, 106, 108, 110, 112, 114, 115, 119–122], followed by term frequency-inverse document frequency (TF-IDF) (N = 16/89, 18.0%) [45, 53, 57, 60, 63, 66, 68, 73, 76, 83, 91, 109, 115, 116, 122, 123], topic models (N = 10/89, 11.2%) [45, 60, 84, 86, 91, 93, 104, 107, 109, 115, 123], keywords (N = 9/89, 10.1%) [52, 75, 76, 91, 98, 100, 117, 123, 127], standardized terms such as Medical Subject ...…”
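To make the two most frequently reported representations concrete, here is a minimal, self-contained sketch that computes BOW and TF-IDF vectors by hand on toy documents. The documents, tokenization, and the particular TF-IDF weighting (raw term frequency normalized by document length, natural-log inverse document frequency) are illustrative assumptions, not details taken from the reviewed studies.

```python
import math
from collections import Counter

# Toy corpus (hypothetical clinical-style snippets, for illustration only).
docs = [
    "patient reports chest pain",
    "patient denies chest pain",
    "no pain reported",
]
tokenized = [d.split() for d in docs]
vocab = sorted({t for doc in tokenized for t in doc})

def bow(doc):
    """Bag-of-words: raw term counts over a fixed vocabulary."""
    counts = Counter(doc)
    return [counts[t] for t in vocab]

# Document frequency: in how many documents each term appears.
n_docs = len(tokenized)
df = {t: sum(t in doc for doc in tokenized) for t in vocab}

def tfidf(doc):
    """TF-IDF: term frequency scaled by inverse document frequency."""
    counts = Counter(doc)
    return [
        (counts[t] / len(doc)) * math.log(n_docs / df[t])
        for t in vocab
    ]

bow_vectors = [bow(d) for d in tokenized]
tfidf_vectors = [tfidf(d) for d in tokenized]
```

Note how a term appearing in every document ("pain" here) receives a TF-IDF weight of zero, which is the property that makes TF-IDF down-weight ubiquitous terms relative to plain BOW counts.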