“…The most frequently used performance metrics in the studied literature are given in Table X. Other, rarely used performance metrics are: maximum sum of the objective values (MaxSum) [102], [99]; maximum distance Dmax [175], [157]; epsilon indicator [45], [134]; average quality [157], [175]; average hypervolume [105], [121]; width measure (M2 metric) [147]; system performance [148]; solution quality [54]; size of the reference set [62]; relative percentage difference between objective values [210]; relative and absolute quality [59]; overall non-dominated vector generation (ONVG) [175]; number of unscheduled tasks [163]; number of generations [68]; number of channel reuses, user-level fairness, and stability of the communication mode of the D2D users [96]; norm-based pure diversity metric [72]; modified mean ideal distance measure [127]; maximum Pareto front error [201]; M1, M2, and M3 metrics [110]; k-distance [198]; hole-relative size metric and µ distance metric [73]; error ratio, distance-based measure, and fairness [158]; empirical attainment function [45]; data envelopment analysis [83]; data dependency threshold [68]; convex hull of the approximated efficient frontier [59]; convergence metric [140]; capacity measure [72]; average rank index, average crowding distance, and the mapping pattern of solutions [89]; convergence rate metric [85]; and infeasibility metric [85]…”
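To make two of the listed metrics concrete, the following is a minimal sketch (not taken from any of the cited works) of the additive epsilon indicator and of ONVG for minimization problems, assuming objective vectors are given as tuples of equal length. The function names `additive_epsilon` and `onvg` are illustrative, not from the survey.

```python
# Sketch of two rarely used multi-objective performance metrics
# (minimization assumed throughout; names are illustrative).

def additive_epsilon(A, B):
    """Additive epsilon indicator I_eps+(A, B): the smallest eps such
    that every point b in the reference set B is weakly dominated by
    some point a in A after shifting A by eps in every objective."""
    return max(
        min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
        for b in B
    )

def dominates(p, q):
    """Pareto dominance for minimization: p is no worse than q in all
    objectives and strictly better in at least one."""
    return all(pi <= qi for pi, qi in zip(p, q)) and any(
        pi < qi for pi, qi in zip(p, q)
    )

def onvg(points):
    """Overall non-dominated vector generation (ONVG): the number of
    non-dominated objective vectors in the obtained set."""
    return sum(
        not any(dominates(q, p) for q in points if q != p)
        for p in points
    )
```

For example, `additive_epsilon(A, A)` is 0 for any set `A`, and a larger value indicates that `A` must be shifted further to cover the reference set `B`.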