Today, Artificial Intelligence ("AI") achieves prodigious real-time performance thanks to growing data volumes and computational power. However, little is known about what a system's results actually convey; they are therefore susceptible to bias, and with AI reaching into almost every domain, even a minuscule bias can cause substantial damage. Efforts to make AI interpretable have been made to address fairness, accountability, and transparency concerns. This paper proposes two distinct methods to understand a system's decisions, aided by visualization of the results. In this study, interpretability is applied to Natural Language Processing-based sentiment analysis using data from social media sites such as Twitter, Facebook, and Reddit. With the Valence Aware Dictionary and sEntiment Reasoner ("VADER"), heatmaps are generated that provide visual justification for the result, improving comprehensibility. Furthermore, Local Interpretable Model-agnostic Explanations ("LIME") are used to provide in-depth insight into the predictions. Experiments show that the proposed system can surpass several contemporary systems designed for interpretability.
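To make the VADER-plus-LIME pairing concrete, here is a minimal sketch of one way to combine the two: VADER's per-class polarity scores are wrapped as a probability function so LIME can perturb the text and fit a local surrogate, yielding a weight per token. The printed token weights (rather than the paper's heatmaps), the sample text, and the preprocessing are illustrative assumptions, not the authors' exact pipeline; it assumes the `vaderSentiment` and `lime` packages are installed.

```python
# Sketch: explaining a VADER sentiment prediction with LIME.
# pip install vaderSentiment lime
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from lime.lime_text import LimeTextExplainer

analyzer = SentimentIntensityAnalyzer()

def vader_proba(texts):
    """Wrap VADER as a 'classifier' for LIME: return per-class probabilities
    (the neg/neu/pos scores in VADER's output already sum to 1)."""
    scores = [analyzer.polarity_scores(t) for t in texts]
    return np.array([[s["neg"], s["neu"], s["pos"]] for s in scores])

explainer = LimeTextExplainer(class_names=["negative", "neutral", "positive"])
text = "The new update is great, but the battery life is awful."  # illustrative example

# Explain the 'positive' class (index 2): LIME perturbs the text, queries
# vader_proba on the variants, and fits a local linear surrogate.
exp = explainer.explain_instance(text, vader_proba, labels=(2,), num_features=6)
for word, weight in exp.as_list(label=2):
    print(f"{word:>10s}  {weight:+.3f}")  # positive weight pushes toward 'positive'
```

The same per-token weights could be rendered as a heatmap over the sentence (e.g., with matplotlib), which is the kind of visual justification the abstract describes.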
Various diseases affect the human liver, some of which are hard to detect from the information exchanged between a patient and a doctor alone. Motivated by the vast potential of AI in medicine, this study seeks the model that predicts the occurrence of liver disease in a given patient with the highest accuracy, based on different input factors. The Indian Liver Patient Dataset, obtained from the UCI Machine Learning Repository, was used to train and test the models. We implemented several machine learning and deep learning algorithms (Multi-Layer Perceptron, Stochastic Gradient Descent, Restricted Boltzmann Machine with Logistic Regression, Support Vector Machines, and Random Forest) and identified the DL-based Multi-Layer Perceptron (MLP) as the model with the highest accuracy; accuracy was compared across models along with precision, recall, and F1 scores. This research aims to add to the current state of the art through a comparative analysis of some of the best ML/DL techniques that have not yet been scrutinized together.
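The comparative setup described above could be reproduced along the following lines, using scikit-learn stand-ins for each of the five models. This is a sketch under stated assumptions, not the authors' code: the file name `ilpd.csv`, the 80/20 split, the hyperparameters, and the assumption that the Gender column is already numerically encoded and the label sits in the last column are all illustrative.

```python
# Sketch: comparing the five models from the abstract on the ILPD data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier, BernoulliRBM
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Assumed local copy of the UCI ILPD CSV: label in the last column,
# Gender already encoded numerically; rows with missing values dropped.
data = pd.read_csv("ilpd.csv").dropna()
X, y = data.iloc[:, :-1].values, data.iloc[:, -1].values
X = MinMaxScaler().fit_transform(X)  # BernoulliRBM expects features in [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=42),
    "SGD": SGDClassifier(random_state=42),
    "RBM+LogReg": make_pipeline(BernoulliRBM(n_components=32, random_state=42),
                                LogisticRegression(max_iter=1000)),
    "SVM": SVC(random_state=42),
    "RandomForest": RandomForestClassifier(random_state=42),
}

# Report the four metrics the abstract compares: accuracy, precision, recall, F1.
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(
        y_te, y_pred, average="weighted", zero_division=0)
    print(f"{name:>12s}  acc={accuracy_score(y_te, y_pred):.3f}  "
          f"prec={p:.3f}  rec={r:.3f}  f1={f1:.3f}")
```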