Hadoop MapReduce is one of the most widely used platforms for large-scale data processing. A Hadoop cluster comprises machines with heterogeneous resources, including memory size, CPU capability, and disk space. This heterogeneity introduces the challenging research problem of improving Hadoop's performance through proper resource provisioning. The work presented in this paper focuses on optimizing job scheduling in Hadoop. A Workload Characteristic and Resource Aware (WCRA) Hadoop scheduler is proposed that classifies jobs as CPU-bound or disk-I/O-bound. Based on their performance, nodes in the cluster are classified as CPU-busy or disk-I/O-busy. Before a job is scheduled on a node, the scheduler ensures that more than 25% of the node's primary memory is available. Performance parameters of Map tasks, such as the time required to parse the data, map, sort, and merge the result, and of Reduce tasks, such as the time to merge, parse, and reduce, are used to categorize a job as CPU-bound or disk-I/O-bound. Tasks are assigned priority based on their minimum estimated completion time. Jobs are scheduled on a compute node in such a way that jobs already running on it are not affected. Experimental results show a 30% performance improvement compared to Hadoop's FIFO, Fair, and Capacity schedulers.
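To make the scheduling decision concrete, the sketch below renders the classification and placement logic described above in Java, Hadoop's implementation language. It is a minimal, hypothetical illustration: the class and field names, the split of phase timings into CPU-heavy and disk-heavy groups, and the completion-time priority field are assumptions made for this sketch, not the authors' actual WCRA implementation.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the WCRA scheduling decision described in the
// abstract; all names and thresholds besides the 25% memory rule are
// illustrative assumptions, not the paper's actual code.
public class WcraSketch {

    enum JobClass { CPU_BOUND, DISK_IO_BOUND }
    enum NodeClass { CPU_BUSY, DISK_IO_BUSY }

    static class JobProfile {
        double parseTime, mapTime, sortTime, mergeTime;      // map-side phases
        double reduceMergeTime, reduceParseTime, reduceTime; // reduce-side phases
        double estimatedCompletionTime;                      // used for priority

        // Classify a job by comparing CPU-heavy phase time (parse, map,
        // reduce) against disk-heavy phase time (sort, merge). Which phases
        // count toward which group is an assumption for illustration.
        JobClass classify() {
            double cpuTime = parseTime + mapTime + reduceParseTime + reduceTime;
            double ioTime = sortTime + mergeTime + reduceMergeTime;
            return cpuTime >= ioTime ? JobClass.CPU_BOUND : JobClass.DISK_IO_BOUND;
        }
    }

    static class NodeState {
        NodeClass load;          // observed dominant load on the node
        double freeMemoryRatio;  // fraction of primary memory still free
    }

    // A node is eligible only if more than 25% of its memory is free and the
    // job's bottleneck resource differs from the node's busy resource, so
    // jobs already running on the node are not slowed down.
    static boolean eligible(JobProfile job, NodeState node) {
        if (node.freeMemoryRatio <= 0.25) return false;
        JobClass jc = job.classify();
        return (jc == JobClass.CPU_BOUND && node.load != NodeClass.CPU_BUSY)
            || (jc == JobClass.DISK_IO_BOUND && node.load != NodeClass.DISK_IO_BUSY);
    }

    // Order the pending queue by minimum estimated completion time, matching
    // the priority rule stated in the abstract.
    static void prioritize(List<JobProfile> queue) {
        queue.sort(Comparator.comparingDouble(j -> j.estimatedCompletionTime));
    }
}
```

Under this reading, the scheduler complements workloads: a CPU-bound job goes to a disk-I/O-busy node and vice versa, so co-located tasks contend for different resources rather than the same one.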