“…Note that, to minimise bias, these comparisons are based only on studies that used the MIAS database [15], four-class classification, and the same evaluation technique as in this study. Our method outperformed the methods in [3,4,7-12] because (a) the feature extraction operators are robust: they capture richer micro-structure information through the three-value encoding technique and are less sensitive to noise, (b) the use of F_GDroi reduces texture similarity in the representation of the breast region, yielding more descriptive features across the different BI-RADS classes, and (c) features are extracted from eight different orientations, capturing a wider range of texture rotation/variation. In breast imaging, deep-learning-based approaches are becoming popular due to their capability to learn complex appearances, especially in segmentation and classification.…”
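The excerpt does not reproduce the operator itself; as an illustrative assumption, the sketch below implements a local ternary pattern (LTP)-style three-value encoding over eight neighbours, consistent with the description in points (a) and (c). The function name `ltp_codes`, the threshold `t`, and the split into upper/lower binary patterns are illustrative choices, not necessarily the authors' exact formulation.

```python
# Hedged sketch: an LTP-style three-value encoding over 8 neighbours.
# The threshold `t` and the upper/lower pattern split are assumptions;
# the paper's exact operator may differ.
import numpy as np

def ltp_codes(img: np.ndarray, t: float = 5.0):
    """Return upper/lower LTP code maps for a 2-D grayscale image."""
    img = img.astype(np.float64)
    # 8 neighbour offsets: the eight orientations around each pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    upper = np.zeros_like(center, dtype=np.uint8)
    lower = np.zeros_like(center, dtype=np.uint8)
    for k, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        diff = neigh - center
        # Three-value encoding: +1 if diff >= t, -1 if diff <= -t, else 0.
        # The zero band is what makes the code less sensitive to noise:
        # small intensity fluctuations around the centre map to 0.
        upper |= ((diff >= t).astype(np.uint8) << k)   # the +1 bits
        lower |= ((diff <= -t).astype(np.uint8) << k)  # the -1 bits
    return upper, lower

# Usage: histogram the two code maps to form a texture descriptor
img = np.random.randint(0, 256, (64, 64))
up, lo = ltp_codes(img, t=5.0)
feat = np.concatenate([np.bincount(up.ravel(), minlength=256),
                       np.bincount(lo.ravel(), minlength=256)])
```

Splitting the ternary code into two binary maps keeps standard 256-bin histograms usable while preserving all three states, which is the usual way LTP-like encodings are turned into feature vectors.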