In this century, Artificial Intelligence (AI) has gained a lot of popularity because AI models now achieve high accuracy scores. Natural Language Processing (NLP), a major subfield of AI, deals with analyzing and processing huge amounts of natural language data. Text Summarization is one of the major applications of NLP. The basic idea of Text Summarization is that when we have large news articles or reviews and need a gist of them within a short period of time, summarization is useful. Text Summarization also finds its unique place in many applications such as patent research, help desks, and customer support. There are numerous ways to build a Text Summarization model, but this paper will mainly focus on building one using the seq2seq architecture and the TensorFlow API.
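As a rough illustration of the encoder-decoder (seq2seq) pattern the abstract refers to, the following minimal sketch uses the TensorFlow Keras API; the vocabulary size, embedding width, and LSTM width are illustrative assumptions, not values taken from the paper.

# Minimal seq2seq (encoder-decoder) sketch with the TensorFlow Keras API.
# VOCAB_SIZE, EMBED_DIM, and LATENT_DIM are assumed values for illustration.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 10_000   # assumed shared vocabulary for articles and summaries
EMBED_DIM = 128
LATENT_DIM = 256

# Encoder: reads the source article and produces its final LSTM states.
encoder_inputs = layers.Input(shape=(None,), name="article_tokens")
enc_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(encoder_inputs)
_, state_h, state_c = layers.LSTM(LATENT_DIM, return_state=True)(enc_emb)

# Decoder: generates the summary token by token, initialized with the
# encoder states (teacher forcing during training).
decoder_inputs = layers.Input(shape=(None,), name="summary_tokens")
dec_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(decoder_inputs)
dec_outputs, _, _ = layers.LSTM(
    LATENT_DIM, return_sequences=True, return_state=True
)(dec_emb, initial_state=[state_h, state_c])
outputs = layers.Dense(VOCAB_SIZE, activation="softmax")(dec_outputs)

model = Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()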
Understanding the predictions made by Machine Learning models is critical in many applications. In this work, we investigate the performance of two methods for explaining tree-based models: Tree Interpreter (TI) and SHapley Additive exPlanations TreeExplainer (SHAP-TE). Using a case study on detecting anomalies in the job runtimes of applications that utilize cloud-computing platforms, we compare these approaches using a variety of metrics, including computation time, significance of attribution values, and explanation accuracy. We find that, although SHAP-TE offers consistency guarantees over TI at the cost of increased computation, this consistency does not necessarily improve explanation performance in our case study.
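The sketch below shows how the two attribution methods named above can be applied to the same tree-based regressor using the treeinterpreter and shap packages; the synthetic data is an assumption for illustration and does not reproduce the paper's cloud-computing job-runtime case study.

# Compare TI and SHAP-TE attributions on a synthetic tree-based regressor.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from treeinterpreter import treeinterpreter as ti

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))            # 500 synthetic jobs, 5 features
y = X[:, 0] * 2.0 + X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Tree Interpreter (TI): prediction = bias + sum of per-feature contributions.
_, ti_bias, ti_contribs = ti.predict(model, X[:10])

# SHAP TreeExplainer (SHAP-TE): Shapley-value attributions with consistency
# guarantees, at a higher computational cost.
shap_values = shap.TreeExplainer(model).shap_values(X[:10])

print("TI contributions shape:", ti_contribs.shape)          # (10, 5)
print("SHAP values shape:     ", np.asarray(shap_values).shape)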