We propose a new self-explainable model for text classification tasks in Natural Language Processing (NLP). Our approach constructs explanations concurrently with its classification predictions: it first extracts a rationale from the text, then uses that rationale alone to predict the concept of interest as the final prediction. The model provides three types of explanations: 1) the extracted rationale, 2) a measure of feature importance, and 3) a clustering of concepts. In addition, we show how our model can be compressed without applying complicated compression techniques. We experimentally demonstrate our explainability approach on a number of well-known text classification datasets.
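The select-then-predict idea in the abstract above can be illustrated with a minimal toy sketch: an extractor keeps the highest-scoring tokens as the rationale, and a classifier then predicts only from that rationale. The scoring scheme, the `k` parameter, and the keyword-weight classifier are illustrative assumptions, not the paper's actual architecture.

```python
def extract_rationale(tokens, scores, k=3):
    """Keep the k highest-scoring tokens (in original order) as the rationale.

    The per-token scores double as a feature-importance measure.
    """
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])  # restore document order
    return [tokens[i] for i in keep]

def classify(rationale, keyword_weights):
    """Predict the concept of interest from the rationale alone.

    keyword_weights is a hypothetical stand-in for a learned classifier.
    """
    total = sum(keyword_weights.get(t, 0.0) for t in rationale)
    return "positive" if total > 0 else "negative"

# Example usage:
tokens = ["the", "movie", "was", "great", "and", "fun"]
scores = [0.1, 0.5, 0.1, 0.9, 0.1, 0.8]
rationale = extract_rationale(tokens, scores, k=3)
label = classify(rationale, {"great": 1.0, "fun": 0.5, "boring": -1.0})
```

Because the prediction is computed from the rationale only, the rationale is a faithful explanation of the decision by construction.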
Face spoofing is one of the recent security threats that face recognition systems have proven vulnerable to. A spoofing attack occurs when an attacker bypasses the authentication scheme by presenting a copy of a valid user's face image. Compared with other biometrics, a face recognition spoofing attack is therefore very easy to perform. This paper addresses the problem of distinguishing imposter face images from live ones. In practice, we approach this problem from a texture analysis point of view, because a printed face usually exhibits quality defects that can be observed by extracting texture features. We adopt the Local Graph Structure (LGS) operator to extract these features; LGS applies a dominant graph to the input image and has proven to be a powerful texture operator. Finally, extensive experimental analysis on the NUAA dataset showed encouraging performance.
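A local graph descriptor of this kind can be sketched as an LBP-style operator: pixel intensities are compared along the edges of a small neighbourhood graph, each comparison contributes one bit to a per-pixel code, and a histogram of codes forms the texture feature. The edge list below is an illustrative assumption; the exact dominant-graph layout used by LGS in the paper may differ.

```python
import numpy as np

# Hypothetical neighbourhood graph: each edge is a pair of (dy, dx) offsets
# relative to the centre pixel. The true LGS dominant graph may differ.
EDGES = [
    ((0, 0), (0, -1)),    # centre -> left
    ((0, -1), (-1, -1)),  # left -> upper-left
    ((-1, -1), (0, 0)),   # upper-left -> centre
    ((0, 0), (0, 1)),     # centre -> right
    ((0, 1), (-1, 1)),    # right -> upper-right
    ((-1, 1), (0, 0)),    # upper-right -> centre
    ((0, 0), (1, 1)),     # centre -> lower-right (assumed)
    ((1, 1), (1, -1)),    # lower-right -> lower-left (assumed)
]

def lgs_code(img, y, x, edges=EDGES):
    """8-bit code: bit i is set when edge i's source pixel >= its target."""
    code = 0
    for i, ((sy, sx), (ty, tx)) in enumerate(edges):
        if img[y + sy, x + sx] >= img[y + ty, x + tx]:
            code |= 1 << i
    return code

def lgs_histogram(img):
    """256-bin histogram of codes over the interior pixels: the texture feature."""
    h, w = img.shape
    hist = np.zeros(256, dtype=np.int64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lgs_code(img, y, x)] += 1
    return hist
```

The resulting histograms could then be fed to any standard classifier to separate live faces from printed copies, since reprinting tends to flatten the fine texture that these codes capture.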