Internet of Things (IoT) applications have been widely adopted across diverse sectors such as healthcare, smart agriculture, smart cities, transportation, and water management, generating substantial volumes of Big Data. Processing this data efficiently requires a platform capable of handling it at scale. However, real-time applications face challenges with cloud processing due to its high latency. Fog computing, a complementary infrastructure to the cloud, emerges as a viable solution by providing task processing, networking, and data storage at the network edge, close to IoT devices and mobile users.
Task offloading is a promising fog computing technique for overcoming resource constraints in IoT applications. It consists of executing part or all of a mobile application on remote fog or cloud resources, with the aim of reducing execution time and energy consumption. Our research focuses on optimizing the IoT task offloading problem in heterogeneous environments under conflicting constraints. We formulate this challenge as a multi-objective optimization problem over two Quality of Service (QoS) metrics: energy consumption and latency. Our proposed solution, named Tof-NSGAII, respects the finite resources of fog computing by balancing workloads while meeting the latency requirements of IoT tasks; a sketch of the formulation follows.
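To make the formulation concrete, a minimal sketch is given below; the notation ($x_i$, $E_i$, $L_i$, $r_i$, $C^{\mathrm{fog}}$) is introduced here purely for illustration and is not taken verbatim from the model:
\begin{align}
  \min_{x \in \{0,1\}^n} \quad & \bigl( E(x),\; L(x) \bigr), \\
  E(x) &= \sum_{i=1}^{n} \bigl[(1 - x_i)\,E_i^{\mathrm{fog}} + x_i\,E_i^{\mathrm{cloud}}\bigr], \\
  L(x) &= \sum_{i=1}^{n} \bigl[(1 - x_i)\,L_i^{\mathrm{fog}} + x_i\,L_i^{\mathrm{cloud}}\bigr], \\
  \text{s.t.} \quad & \sum_{i=1}^{n} (1 - x_i)\, r_i \;\le\; C^{\mathrm{fog}},
\end{align}
where $x_i = 0$ places task $i$ on a fog node and $x_i = 1$ offloads it to the cloud, $r_i$ is the task's resource demand, and $C^{\mathrm{fog}}$ is the finite fog capacity.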
We adapted the widely used non-dominated sorting genetic algorithm II (NSGA-II) meta-heuristic to generate a set of non-dominated task offloading solutions that jointly optimize energy consumption and latency. Experimental results show that Tof-NSGAII produces offloading solutions that judiciously distribute tasks between fog and cloud computing environments according to their specific requirements. Moreover, the generated non-dominated solutions reduce energy consumption by an average of 12.18\% compared to alternative approaches, at the cost of only a marginal 0.38\% increase in latency, a difference that can be considered negligible.
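For illustration, the sketch below shows how such an NSGA-II loop can be instantiated for binary offloading decisions. It is a minimal, self-contained example: the cost parameters and the load-dependent fog latency model are hypothetical placeholders, not values from our evaluation.
\begin{verbatim}
# Minimal NSGA-II sketch for binary task offloading (0 = fog, 1 = cloud).
# All cost parameters below are hypothetical, not taken from the paper.
import random

N_TASKS, POP_SIZE, GENERATIONS = 20, 40, 50
E_FOG, E_CLOUD = 1.0, 2.5   # per-task energy cost (hypothetical units)
L_FOG, L_CLOUD = 1.0, 4.0   # per-task latency cost (hypothetical units)

def evaluate(chrom):
    """Return (energy, latency); fog latency grows with its load
    to mimic the finite capacity of fog nodes."""
    cloud = sum(chrom)
    fog = N_TASKS - cloud
    energy = fog * E_FOG + cloud * E_CLOUD
    latency = fog * L_FOG * (1 + fog / N_TASKS) + cloud * L_CLOUD
    return energy, latency

def dominates(a, b):  # Pareto dominance for minimization
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(objs):
    fronts, S, n = [[]], [set() for _ in objs], [0] * len(objs)
    for p in range(len(objs)):
        for q in range(len(objs)):
            if dominates(objs[p], objs[q]):
                S[p].add(q)
            elif dominates(objs[q], objs[p]):
                n[p] += 1
        if n[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]

def crowding(front, objs):  # crowding distance within one front
    d = {i: 0.0 for i in front}
    for m in range(2):
        srt = sorted(front, key=lambda i: objs[i][m])
        d[srt[0]] = d[srt[-1]] = float("inf")
        span = objs[srt[-1]][m] - objs[srt[0]][m] or 1.0
        for k in range(1, len(srt) - 1):
            d[srt[k]] += (objs[srt[k+1]][m] - objs[srt[k-1]][m]) / span
    return d

def make_child(pop):  # one-point crossover plus one bit-flip mutation
    p1, p2 = random.sample(pop, 2)
    cut = random.randrange(1, N_TASKS)
    child = p1[:cut] + p2[cut:]
    child[random.randrange(N_TASKS)] ^= 1
    return child

pop = [[random.randint(0, 1) for _ in range(N_TASKS)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    union = pop + [make_child(pop) for _ in range(POP_SIZE)]
    objs = [evaluate(c) for c in union]
    nxt = []
    for front in fast_nondominated_sort(objs):
        if len(nxt) + len(front) <= POP_SIZE:
            nxt.extend(front)
        else:  # fill the remaining slots by crowding distance
            d = crowding(front, objs)
            keep = sorted(front, key=lambda i: -d[i])
            nxt.extend(keep[:POP_SIZE - len(nxt)])
            break
    pop = [union[i] for i in nxt]

pareto = fast_nondominated_sort([evaluate(c) for c in pop])[0]
for i in pareto[:5]:
    print(evaluate(pop[i]))  # sample (energy, latency) trade-offs
\end{verbatim}
Each chromosome is a placement vector over the tasks; the final population's first front approximates the energy-latency Pareto set from which an operator can pick a solution matching the application's latency budget.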