In the era of big data, analyzing and extracting knowledge from large-scale data sets is a challenging task. The application of standard data mining tools to such data sets is not straightforward. Hence, a new class of scalable mining methods that embrace the huge storage and processing capacity of cloud platforms is required. In this work, we propose a novel distributed partitioning methodology for prototype reduction techniques in nearest neighbor classification. These methods aim to represent original training data sets as a reduced number of instances. Their main purposes are to speed up the classification process and to reduce the storage requirements and noise sensitivity of the nearest neighbor rule. However, standard prototype reduction methods cannot cope with very large data sets. To overcome this limitation, we develop a MapReduce-based framework that distributes the functioning of these algorithms across a cluster of computing elements, proposing several algorithmic strategies to integrate multiple partial solutions (reduced sets of prototypes) into a single one. The proposed model enables prototype reduction algorithms to be applied to big data classification problems without significant accuracy loss. We test the speedup capabilities of our model on data sets of up to 5.7 million instances. The results show that this model is a suitable tool to enhance the performance of the nearest neighbor classifier with big data.
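To make the described workflow concrete, the following is a minimal Python sketch of the map/reduce scheme outlined above, assuming a disjoint random partitioning of the training set, a placeholder reduction step, and a plain union-based merge of the partial prototype sets. All of these choices (the function names, the toy reduction, and the merge strategy) are illustrative assumptions, not the paper's actual algorithms or integration strategies.

# A minimal, illustrative sketch of the map/reduce workflow, not the
# authors' implementation. The partitioning scheme, the placeholder
# reduction step, and the union merge are assumptions for illustration.
from typing import Callable, List, Tuple
import random

Instance = Tuple[Tuple[float, ...], int]  # (feature vector, class label)

def map_phase(training_set: List[Instance],
              num_partitions: int,
              reduce_prototypes: Callable[[List[Instance]], List[Instance]]
              ) -> List[List[Instance]]:
    """Split the training set into disjoint partitions and apply a
    prototype reduction method to each partition independently
    (each call corresponds to one map task on the cluster)."""
    shuffled = list(training_set)
    random.shuffle(shuffled)  # random partitioning assumed for illustration
    partitions = [shuffled[i::num_partitions] for i in range(num_partitions)]
    return [reduce_prototypes(part) for part in partitions]

def reduce_phase(partial_sets: List[List[Instance]]) -> List[Instance]:
    """Integrate the partial reduced sets into a single set of prototypes.
    A plain union is shown here; the paper proposes several strategies."""
    merged: List[Instance] = []
    for partial in partial_sets:
        merged.extend(partial)
    return merged

def toy_reduction(partition: List[Instance]) -> List[Instance]:
    """Stand-in for a real prototype reduction algorithm: keeps a random
    10% of the instances, purely for demonstration."""
    k = max(1, len(partition) // 10)
    return random.sample(partition, k)

if __name__ == "__main__":
    data = [((random.random(), random.random()), random.randint(0, 1))
            for _ in range(10_000)]
    prototypes = reduce_phase(
        map_phase(data, num_partitions=8, reduce_prototypes=toy_reduction))
    print(f"{len(data)} instances reduced to {len(prototypes)} prototypes")

In an actual deployment, each partition would be processed by a separate mapper and the merge would run in the reduce stage, so the cost of prototype reduction scales with the partition size rather than with the full training set.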