Human face recognition and generation are currently active areas of computer vision, drawing the interest of researchers and producing applications in multiple domains. In this paper, we propose a new approach to face attribute classification (FAC) that takes advantage of both binary classification and data augmentation. Binary classification yields high prediction scores, while augmented data prevent overfitting and compensate for the scarcity of sketched photos. Our approach, named Augmented Binary Multi-label CNN (ABM-CNN), consists of three steps: (i) splitting the data; (ii) transforming images into sketches (a simplification process); (iii) training each attribute separately with two convolutional neural networks, where the first network predicts attributes on real images and the second on sketches. Through experimentation, we find that some attributes are predicted more accurately from sketches than from real images. In addition, we build a new, more consistent and complete face dataset by generating images with the StyleGAN model, and we apply our method to extract face attributes from it. Our results show that the proposed approach outperforms related work.
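The dual-network idea above can be sketched conceptually: one binary classifier per attribute is evaluated on real images and another on sketches, and each attribute is routed to whichever branch performed better on validation data. This is a minimal illustrative sketch, not the authors' implementation; the attribute names and accuracy values below are hypothetical placeholders.

```python
# Conceptual sketch of the ABM-CNN routing idea (illustrative only):
# per attribute, prefer the input domain (real image vs. sketch)
# whose classifier achieved the higher validation accuracy.

def choose_branch(val_acc_real, val_acc_sketch):
    """Return, for each attribute, which input domain to trust."""
    return {
        attr: ("sketch" if val_acc_sketch[attr] > val_acc_real[attr] else "real")
        for attr in val_acc_real
    }

# Hypothetical validation accuracies for three attributes.
acc_real = {"smiling": 0.92, "eyeglasses": 0.88, "male": 0.97}
acc_sketch = {"smiling": 0.89, "eyeglasses": 0.94, "male": 0.95}

routing = choose_branch(acc_real, acc_sketch)
print(routing)  # {'smiling': 'real', 'eyeglasses': 'sketch', 'male': 'real'}
```

In practice each branch would be a trained CNN; the sketch only shows how per-attribute results from the two networks could be combined.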