Cloud services are in high demand due to their large storage and computing capacity. Apache Spark provides an open-source framework for data storage and computation using cluster computing. The default Spark core scheduler uses FIFO to manage job execution in batches. However, FIFO may not be suitable for large-scale clusters because it allocates resources unevenly across different types of applications. As a result, many executors remain underutilized and resources are wasted across the pods, leading to cost inefficiency. Running cloud applications with Apache Spark on Kubernetes enables rapid resource management for workload execution. Because incoming workloads vary widely across applications, managing workload allocation is critical to ensuring QoS and cost efficiency. This paper proposes a job scheduling mechanism (JSM) for Apache Spark on Kubernetes that dynamically schedules job allocation for the efficient execution of diverse big data applications. JSM predicts the cluster load and relocates workloads to less-loaded pods in a standard cluster to optimize cost performance. It identifies the upcoming workload of a job, determines the best-fit pod, and aims to reduce CPU and memory usage, thereby enhancing cost efficiency. By effectively managing job allocation and migration among underloaded pods, JSM conserves resources and improves cost efficiency. Experimental settings are configured to evaluate cluster resources using benchmark statistics for application job execution. The results for cost, job performance, and scheduling overhead show improved cost efficiency for job execution. Compared with the existing scheduler under varying request loads, JSM achieves a 2% improvement in cost efficiency and 3% lower scheduling overhead.
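The best-fit placement idea described above can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the pod model, free-capacity fields, and the slack-based scoring rule are hypothetical assumptions chosen to show how a scheduler might pick the tightest-fitting pod for a job's predicted CPU and memory demand.

```python
# Hypothetical best-fit pod selection of the kind JSM describes.
# Pod names, capacities, and the scoring rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    cpu_free: float   # unused CPU cores
    mem_free: float   # unused memory (GiB)

def best_fit_pod(pods, cpu_req, mem_req):
    """Return the pod that fits the job with the least leftover capacity.

    Packing jobs into the tightest-fitting pod leaves lightly loaded pods
    free for larger upcoming workloads, reducing wasted CPU and memory.
    """
    # Keep only pods that can actually host the job's predicted demand.
    candidates = [p for p in pods
                  if p.cpu_free >= cpu_req and p.mem_free >= mem_req]
    if not candidates:
        return None  # no pod fits; caller may queue or scale the cluster
    # Best fit: minimise the total slack remaining after placement.
    return min(candidates,
               key=lambda p: (p.cpu_free - cpu_req) + (p.mem_free - mem_req))

pods = [Pod("pod-a", 4.0, 8.0), Pod("pod-b", 2.0, 4.0), Pod("pod-c", 8.0, 16.0)]
print(best_fit_pod(pods, 1.5, 3.0).name)  # pod-b: the tightest fit
```

A production scheduler would additionally weigh predicted future load, migration cost, and QoS constraints, which this sketch omits.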