“…To further demonstrate that newly developed models "outperformed existing ones" (with few exceptions [44, 135, 136]), the reported performance metrics are often compared against those of default or baseline models and other state-of-the-art approaches [2, 27, 33, 62, 122, 134, 135, 152, 165, 168, 176, 177, 211, 218]. In addition to performance reports, a number of these papers foregrounded methodological contributions, such as new approaches to data labelling [168, 211] and feature extraction [112]; the inclusion of time-dependent features [33, 165, 192, 211, 218]; improvements to data representations [177] and data integration [27]; and strategies to optimize data collection (periods) [128, 152, 218]. Building on these results, authors often concluded that their work presented a "proof-of-concept" demonstrating "the potential" of using a particular technology [27, 50, 153], data source [57, 62, 64, 112, 128, 134, 165, 176, 218], or algorithm [44] for understanding, detecting, or inferring a relationship with mental health.…”