“…The first work in this direction used pre-trained networks such as AlexNet and VGG19, whose features are fused and then classified to detect a morphing attack [93]. Following this, several pre-trained deep CNNs, such as AlexNet, VGG19, VGG-Face16, GoogLeNet, ResNet18, ResNet152, ResNet50, VGG-Face2 and OpenFace [52], [43], [122], [112], [107], [108], [113], [71], [112], have been explored. Although deep CNNs have shown better performance than hand-crafted texture-descriptor-based MAD methods on both digital and print-scan data, the generalisation capability of these approaches across different print-scan datasets is limited [110].…”
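The feature-fusion approach cited above can be illustrated with a minimal sketch: deep features are extracted from two pre-trained CNNs (AlexNet and VGG19 here), concatenated, and passed to a simple binary classifier that separates bona fide from morphed face images. This is not the exact pipeline of [93]; the choice of fusion layer, the ImageNet weights, and the `load_face_images()` helper are assumptions for illustration only.

```python
# Hedged sketch of deep-feature fusion for morphing attack detection (MAD).
# Assumptions: torchvision ImageNet weights as fixed feature extractors,
# a linear SVM as the classifier, and a hypothetical load_face_images() helper.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Pre-trained backbones used only as frozen feature extractors.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).to(device).eval()
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).to(device).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_fused_features(pil_image):
    """Concatenate 4096-d penultimate-layer features from both networks."""
    x = preprocess(pil_image).unsqueeze(0).to(device)
    with torch.no_grad():
        # Run convolutional stages, then the classifier head up to its
        # second fully connected layer (4096-d output for both models).
        f_alex = alexnet.classifier[:5](
            torch.flatten(alexnet.avgpool(alexnet.features(x)), 1))
        f_vgg = vgg19.classifier[:5](
            torch.flatten(vgg19.avgpool(vgg19.features(x)), 1))
    return torch.cat([f_alex, f_vgg], dim=1).squeeze(0).cpu().numpy()

# Hypothetical usage: fuse features per image and fit a linear SVM to
# decide bona fide (0) vs. morphed (1).
# images, labels = load_face_images()                 # assumed helper
# feats = [extract_fused_features(img) for img in images]
# clf = LinearSVC().fit(feats, labels)
```

In this sketch the backbones stay frozen and only the final classifier is trained, which mirrors the transfer-learning setting described in the surveyed works; fine-tuning the CNNs end-to-end is an alternative the excerpt does not commit to.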