“…

| Algorithms | Advantages | Limitations |
| --- | --- | --- |
| … Gradient boosting machines, Random forest [45], [75], [85], [86], [116], [187], [195] | Capitalises on the merits of individual methods | … |
| Clustering algorithms: k-Means [41], [50], [69], [71], [207], k-Medians, Expectation maximization [50], [73], Hierarchical clustering [41], [50], [207] | Useful for making sense of data | Results are sometimes difficult to interpret; very limited when dealing with unfamiliar datasets |
| Dimensionality reduction algorithms: Principal component analysis [41], [64], [73], [79], [83], [131], [135], [165], Principal component regression [41], Partial least squares regression [21] | Good for handling large datasets without necessarily making assumptions on the data | Not effective when dealing with non-linear data; it is sometimes difficult to understand the meaning of the results |

…”
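To make the clustering entry concrete, the sketch below implements a minimal 1-D k-Means in pure Python: it alternates an assignment step (each point joins its nearest centroid) with an update step (each centroid moves to its cluster's mean). The data, cluster count, and function name are illustrative assumptions, not taken from the cited works.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal k-Means for 1-D data: alternate assignment and mean-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # illustrative init: k random points
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: each centroid moves to the mean of its assigned points.
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:  # no movement means convergence
            break
        centroids = new_centroids
    return sorted(centroids)

# Two well-separated 1-D groups; k-Means recovers a centroid near each group mean.
data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
print(kmeans(data, k=2))
```

The table's caveat that clustering results can be hard to interpret shows up even here: the returned centroids depend on the random initialisation, and on less separated data different seeds can yield different partitions.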