The set covering problem (SCP) is a well-known NP-hard combinatorial problem with practical applications in many fields. To optimize the SCP objective function, many heuristic, metaheuristic, greedy, and approximation approaches have been proposed in recent years. Within swarm intelligence, particle swarm optimization (PSO) is a nature-inspired optimization technique for continuous problems; for discrete problems there is the well-known discrete particle swarm optimization (DPSO) method, and, more recently, jumping particle swarm optimization (JPSO). In this DPSO variant, solutions are improved through the attraction of particles toward an attractor. In this paper, a new JPSO-based approach is proposed to solve the SCP. The proposed approach works in three phases: selecting an attractor, refining the feasible solution given by the attractor to approach optimality, and removing redundancy from the solution. The approach has been tested on benchmark SCP instances and compared with the best known methods. Computational results show that it produces high-quality solutions in very short running times compared to other algorithms.
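As a rough illustration of the third phase, the sketch below removes redundant columns from a feasible SCP cover. This is a minimal sketch under stated assumptions: the names (remove_redundancy, subsets, cost) are hypothetical, and the decreasing-cost scan order is one common convention, not necessarily the paper's exact rule.

```python
def remove_redundancy(cover, subsets, universe, cost):
    """Drop any selected column whose rows are all covered by other
    selected columns; scanning in decreasing-cost order is a common
    convention (the paper's exact rule may differ)."""
    count = {row: 0 for row in universe}  # how many selected columns cover each row
    for j in cover:
        for row in subsets[j]:
            count[row] += 1
    pruned = set(cover)
    for j in sorted(cover, key=lambda c: (-cost[c], c)):
        if all(count[row] > 1 for row in subsets[j]):  # column j is redundant
            pruned.remove(j)
            for row in subsets[j]:
                count[row] -= 1
    return pruned

# Tiny illustrative instance: rows 1..4, four candidate columns with costs.
universe = {1, 2, 3, 4}
subsets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {1, 2, 3, 4}}
cost = {0: 1, 1: 1, 2: 1, 3: 3}
print(remove_redundancy({0, 1, 2, 3}, subsets, universe, cost))  # {0, 2}
```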
Data analysis is an important functionality in cloud computing, allowing huge amounts of data to be processed over very large clusters. Hadoop is a software framework for large-scale data analysis: it provides the Hadoop Distributed File System (HDFS) for storage, while the analysis and transformation of very large data sets is performed using the MapReduce paradigm. MapReduce is a popular programming model for processing large data sets in cloud environments owing to its excellent scalability and good fault tolerance, and HDFS is designed to stream those data sets. The Hadoop MapReduce system was often unfair in its allocation, and a dramatic improvement is achieved through the proposed Mapper Reducer system. The proposed mapper/reducer function, based on a mean shift clustering algorithm, analyzes the data set and achieves better job-execution performance by choosing an optimal configuration of mappers and reducers based on the size of the data sets; it also helps users view job status and locate errors in scheduled jobs. This efficiently exploits the performance-tuning properties of optimally scheduled jobs, so the resulting system substantially lowers system cost, energy usage, and management complexity while increasing overall performance.
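To make the paradigm concrete, here is a minimal single-process Python sketch of the map/shuffle/reduce pattern described above, using word count as the canonical example. It illustrates the programming model only; it is not Hadoop's actual engine or API, and all function names here are illustrative.

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user-supplied mapper to every input record,
    yielding intermediate (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group intermediate values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user-supplied reducer to each key group."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count over two input lines.
lines = ["large data analysis", "large data sets"]
mapper = lambda line: ((word, 1) for word in line.split())
reducer = lambda word, counts: sum(counts)
print(reduce_phase(shuffle(map_phase(lines, mapper)), reducer))
# {'large': 2, 'data': 2, 'analysis': 1, 'sets': 1}
```

In a real Hadoop deployment the shuffle is distributed across the cluster, and the number of map and reduce tasks is what the proposed system tunes to the input size.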