“…We have also noticed that l mse function is also utilized in 6 different studies followed by l mae , the function being used in 2 studies only. [20,22,44,45,48,51,58,62,72,76,79,84,85], [86, 92, 94, 96-98, 117, 125, 150, 151] The binary cross-entropy loss function evaluates each projected probability against the actual class outcome, which is either 0 or 1 L4 l dice [27, 51, 64, 66, 67, 90, 94, 95, 117, 120, 122][130, 141, 144, 145, 149] The term dice loss comes from the Sørensen-Dice coefficient, a statistic created in the 1940s to determine the similarity of two samples L5 l cyc [106]. [140] The element-wise loss is employed to ensure self-similarity during cyclic transformation when an unaligned training pair is provided L6 l r p [22] Regional perceptual loss function L7 l image [91] The element-wise data fidelity loss in the image domain is used to validate structure similarity to the target when an aligned training pair is provided [101] Perceptual loss function L9 l tvl1 [101] The loss function is based on TVL1 algorithm L10 l al [89] The multiple paths of loss are penalized differently in an asymmetric loss function L11 l acc [89] The function is based on achieved accuracy from the confusion matrix L12 l f l [77] The focal loss function is utilized to address class imbalance issues during the network training L13 l mse [48,66,68,69,81,1...…”