“…Adversarial attacks have been investigated in the domains of images, audio, and text, and more recently in the classification of Windows executable files, and numerous successful adversarial examples have been generated in the image, audio, and text domains to cause misclassification [1][2][3][4][5][6][7]. The principal reason for this success is that the feature space in these domains is comparatively fixed: an image, for instance, can be represented as a three-dimensional array of pixels in which each pixel is an RGB (red, green, blue) vector with values ranging from 0 to 255. Over such a space it is feasible to define a differentiable function from input to loss, so a gradient-based feature-space attack can be applied directly to images or text to generate adversarial examples.…”
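The gradient-based feature-space attack described above can be illustrated with a minimal FGSM-style sketch, assuming a PyTorch image classifier; the model, tensor shapes, and epsilon value below are illustrative placeholders rather than the setup used in the cited works.

```python
# Minimal sketch of a gradient-based feature-space attack (FGSM-style).
# Assumes a differentiable classifier over a fixed pixel space, as the
# passage describes; the toy model and random inputs are hypothetical.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Perturb an image batch x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Because the pixel space is differentiable end-to-end, a single
    # gradient step yields a perturbation that can cause misclassification.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

if __name__ == "__main__":
    # Toy classifier and random "images", purely to show the call shape.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)       # 4 RGB images with values in [0, 1]
    y = torch.randint(0, 10, (4,))     # arbitrary labels
    adv = fgsm_example(model, x, y)
    print((adv - x).abs().max())       # perturbation bounded by epsilon
```

The same recipe does not transfer directly to Windows executables, since bytes cannot be perturbed continuously without breaking the file's semantics, which is the contrast the passage goes on to draw.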