Human learners can generalize a new concept from a small number of examples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans possess cognitive biases that promote fast learning. Here, we developed a method that narrows the gap between humans and machines in this type of inference by exploiting such cognitive biases. We implemented a human cognitive model in machine learning algorithms and compared its performance with that of widely used methods: naïve Bayes, support vector machines, neural networks, logistic regression, and random forests. We focused on spam classification, a task that has long been studied in machine learning and typically requires a large amount of data to reach high accuracy. Our models achieved superior performance with small and biased samples compared with the other representative machine learning methods.
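The abstract does not include the experimental code. Purely as an illustration of how such a small-sample comparison of the named baselines might be set up, the sketch below uses scikit-learn with a toy placeholder corpus; the dataset, features, and parameters are assumptions, not the authors' setup.

```python
# Illustrative sketch only: standard classifiers trained on a tiny, biased
# sample for spam classification. Data and parameters are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# toy corpus standing in for a spam dataset (hypothetical data)
train_texts = ["win money now", "cheap meds online", "meeting at noon", "lunch tomorrow?"]
train_labels = [1, 1, 0, 0]          # 1 = spam, 0 = ham
test_texts = ["free money offer", "see you at the meeting"]
test_labels = [1, 0]

vec = CountVectorizer()
X_train = vec.fit_transform(train_texts)
X_test = vec.transform(test_texts)

baselines = {
    "naive Bayes": MultinomialNB(),
    "SVM": SVC(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}

for name, clf in baselines.items():
    clf.fit(X_train, train_labels)
    acc = accuracy_score(test_labels, clf.predict(X_test))
    print(f"{name}: accuracy = {acc:.2f}")
```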
Previous studies have shown that certain combinations of human cognitive biases are effective in machine learning. A widely used model of these biases is the loosely symmetric (LS) model. We show the efficiency and accuracy of the loosely symmetric model and its implementation of two cognitive biases, symmetry and mutual exclusivity. In this study, we use the loosely symmetric model as a binary classifier to improve accuracy on small datasets.
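The abstract does not spell out the LS formula. As a hedged sketch only, the helper below implements one formulation commonly reported in the LS literature, in which the plain conditional probability over a 2×2 contingency table is adjusted by terms encoding the symmetry and mutual-exclusivity biases; the exact form used by the authors may differ.

```python
def loosely_symmetric(a, b, c, d):
    """One commonly reported form of the loosely symmetric (LS) value LS(q|p).

    a, b, c, d are co-occurrence counts from a 2x2 contingency table:
        a = #(p, q), b = #(p, not q), c = #(not p, q), d = #(not p, not q).
    The plain conditional probability would be a / (a + b); the extra
    bd/(b+d) and ac/(a+c) terms inject the symmetry and mutual-exclusivity
    biases. This is an assumption drawn from the LS literature, not a
    formula quoted from the abstract above.
    """
    bd = (b * d) / (b + d) if (b + d) > 0 else 0.0
    ac = (a * c) / (a + c) if (a + c) > 0 else 0.0
    denom = a + bd + b + ac
    return (a + bd) / denom if denom > 0 else 0.0
```

For example, with counts a=8, b=2, c=1, d=9 this yields roughly 0.77, slightly below the plain conditional probability 8/(8+2) = 0.8, because the bias terms also weigh the evidence from the complementary cells.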
We propose a human-cognition-inspired classification model based on Naïve Bayes. Our previous study showed that human-cognition-inspired heuristics can enhance the prediction accuracy of a text classifier based on Naïve Bayes. In that study, our classification model, which handles multidimensional feature vectors of both categories, showed higher performance than the conventional Naïve Bayes under specific conditions. In this paper, to investigate the mechanism behind this higher classification performance, we further tested our model and a modified variant. The two models showed slightly different behaviors, but both achieved higher performance than the conventional Naïve Bayes.
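The abstract does not give the model's equations. As an illustrative sketch only, the code below shows one way a bias-adjusted per-feature probability (reusing the hypothetical loosely_symmetric helper sketched above) could stand in for the usual word likelihoods in a Naïve Bayes-style spam score; this is an assumption, not the authors' implementation.

```python
import math

def ls_naive_bayes_score(doc_tokens, spam_docs, ham_docs):
    """Illustrative LS-flavoured Naive Bayes score (assumption, not the paper's model).

    For each token w, build the 2x2 table over training documents
    (a = spam docs containing w, b = spam docs without w,
     c = ham docs containing w, d = ham docs without w) and use the
    hypothetical loosely_symmetric() value in place of P(w | spam).
    Returns a log-score; higher means "more spam-like".
    """
    n_spam, n_ham = len(spam_docs), len(ham_docs)
    score = math.log(n_spam / (n_spam + n_ham))  # class prior
    for w in set(doc_tokens):
        a = sum(w in d for d in spam_docs)
        b = n_spam - a
        c = sum(w in d for d in ham_docs)
        d = n_ham - c
        p = loosely_symmetric(a, b, c, d)
        score += math.log(p + 1e-9)  # small constant avoids log(0)
    return score
```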