“…Table 4 provides a descriptive summary of the data scaling, feature selection, and hybrid model categories identified in the sample of articles for each year, with responses recorded as either “yes” or “no.” The use of preprocessing techniques is quite common, particularly for neural networks (Qi, 2002; Tseng et al., 2012), and it can be carried out in numerous ways. The most widely used techniques are min‐max normalization (Bhowmick et al., 2019; Das & Padhy, 2018; Das et al., 2015; Hajibabaei et al., 2014; Hegde et al., 2018; Shrivastava & Panigrahi, 2011), normal distribution approximation (Van Gestel et al., 2006; Yazdani‐Chamzini et al., 2012), also referred to as standardization (McNally et al., 2018), the softmax function (Jiang et al., 2018), log‐transformation (Gradojevic & Yang, 2006; Kumar, 2010; Kumar & Thenmozhi, 2012; Matyjaszek et al., 2019), and differencing (Cao & Tay, 2001; Gradojevic & Yang, 2006); the remaining articles do not specify any data transformation. Among these techniques, min‐max normalization is the most prevalent.…”
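To make the scaling operations named above concrete, the following is a minimal NumPy sketch, not drawn from any of the surveyed articles; the input series and its values are illustrative assumptions, and each transform is shown in its simplest textbook form.

```python
import numpy as np

prices = np.array([101.2, 103.5, 99.8, 104.1, 107.3])  # hypothetical price series

# Min-max normalization: rescale values to the [0, 1] range.
min_max = (prices - prices.min()) / (prices.max() - prices.min())

# Standardization (normal distribution approximation): zero mean, unit variance.
standardized = (prices - prices.mean()) / prices.std()

# Softmax: map values to a probability-like vector that sums to 1
# (shifting by the maximum first for numerical stability).
exp_shifted = np.exp(prices - prices.max())
softmax = exp_shifted / exp_shifted.sum()

# Log-transform: compress the scale of strictly positive values.
log_transformed = np.log(prices)

# First differences: remove the level/trend by subtracting consecutive values.
differenced = np.diff(prices)
```

In practice, the same transforms are often applied through library scalers fitted on the training split only, so that test data are scaled with the training-set statistics rather than their own.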