Cloud computing infrastructures that incorporate mobile IoT devices have become a key component of future high-performance computing networks because they supply distributed, hierarchical, and fine-grained resources. The key is to jointly optimize computation offloading and service caching. However, the joint service caching and computation offloading problem faces three major obstacles: dynamic tasks, heterogeneous resources, and coupled decisions. In this paper, we study joint caching of various services and computation offloading in Dew-assisted mobile IoT and Fog-Cloud computing networks. We formulate the optimization problem as minimizing the long-term average service delay, which is NP-hard. Through a thorough theoretical analysis, we decompose the problem into two subproblems, i.e., computation offloading and service caching. To solve the formulated problem, we develop a novel Distributed Deep Reinforcement Learning (DDRL) technique in which multiple Dew-assisted mobile IoT devices and a Cloud VM jointly determine the caching actions and offloading actions, respectively. Trace-driven simulation results show that the proposed framework outperforms several existing techniques in terms of average service delay across multiple scenarios. Compared with reinforcement-learning-based methods, our framework achieves a 39% reduction in average service delay and an approximately 37% improvement in convergence in a practical real-world environment.
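
To make the coupled decision structure concrete, the sketch below illustrates, in miniature, how a device-side caching action and a network-side offloading action can be learned jointly to reduce service delay. It is not the paper's DDRL algorithm: it substitutes two independent tabular Q-learners for the deep networks, and the toy delay model, the service and target counts, and all hyperparameters (BASE_DELAY, CACHE_HIT_DISCOUNT, ALPHA) are illustrative assumptions.

```python
# Minimal illustrative sketch (NOT the paper's DDRL algorithm): two
# independent tabular Q-learners jointly pick a caching action (device
# side) and an offloading action (cloud side) to minimize a toy
# service-delay model. All constants below are assumptions.
import random

N_SERVICES = 4                            # services a device may cache
N_TARGETS = 3                             # 0 = local, 1 = fog, 2 = cloud
BASE_DELAY = {0: 1.0, 1: 2.0, 2: 5.0}     # hypothetical per-target delay
CACHE_HIT_DISCOUNT = 0.5                  # delay factor on a local cache hit
ALPHA = 0.1                               # learning rate

def delay(service, cached_service, target):
    """Toy delay: base offloading cost, discounted on a local cache hit."""
    d = BASE_DELAY[target]
    if target == 0 and service == cached_service:
        d *= CACHE_HIT_DISCOUNT
    return d

# Q-tables indexed by the requested service (the "state").
q_cache = [[0.0] * N_SERVICES for _ in range(N_SERVICES)]
q_offload = [[0.0] * N_TARGETS for _ in range(N_SERVICES)]

def eps_greedy(row, eps=0.1):
    """Pick a random action with prob eps, else the greedy one."""
    if random.random() < eps:
        return random.randrange(len(row))
    return max(range(len(row)), key=lambda a: row[a])

for step in range(20000):
    service = random.randrange(N_SERVICES)   # incoming request
    cache_a = eps_greedy(q_cache[service])   # device agent: what to cache
    off_a = eps_greedy(q_offload[service])   # cloud agent: where to run
    r = -delay(service, cache_a, off_a)      # shared reward = negative delay
    # One-step (bandit-style) updates; each agent sees the joint reward.
    q_cache[service][cache_a] += ALPHA * (r - q_cache[service][cache_a])
    q_offload[service][off_a] += ALPHA * (r - q_offload[service][off_a])

for s in range(N_SERVICES):
    best_c = max(range(N_SERVICES), key=lambda a: q_cache[s][a])
    best_t = max(range(N_TARGETS), key=lambda a: q_offload[s][a])
    print(f"service {s}: cache service {best_c}, offload target {best_t}")
```

Under this toy model the two learners converge to caching the requested service and executing locally, which mirrors, at small scale, why coupling the caching and offloading decisions reduces delay relative to optimizing either one alone.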