Mobile edge computing (MEC) has emerged as a pivotal technology for meeting the computational demands of resource-constrained mobile devices by offloading tasks to nearby edge servers. However, ensuring both the security and the efficiency of computation offloading in multi-access MEC networks remains a critical challenge. This paper proposes a novel approach that leverages deep reinforcement learning (DRL) for secure computation offloading in multi-access MEC networks. In the proposed framework, DRL agents dynamically make offloading decisions based on current network conditions, resource availability, and security requirements. The agents learn optimal offloading policies through interaction with the environment, aiming to maximize task-completion efficiency while minimizing security risks. To strengthen security, the framework integrates encryption techniques and access-control mechanisms that protect sensitive data during offloading. The proposed approach is evaluated through comprehensive simulations that assess its security, efficiency, and scalability. The results demonstrate that the DRL-based approach effectively balances the trade-off between security and efficiency, achieving robust and adaptive computation offloading in multi-access MEC networks. This study advances the state of the art in secure and efficient mobile edge computing, fostering the development of intelligent and resilient MEC solutions for future mobile networks.
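
To make the decision loop concrete, the sketch below illustrates the kind of reinforcement-learning offloading agent the abstract describes, using plain tabular Q-learning as a simplified stand-in for a DRL method. The state discretization (channel quality, server load, data sensitivity), the reward weights `W_LATENCY` and `W_RISK`, and the toy environment dynamics are all illustrative assumptions introduced here, not the paper's actual formulation.

```python
# A minimal sketch of the reward-driven offloading loop described above.
# Everything below (state features, reward shape, hyperparameters) is an
# illustrative assumption, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

# Discretized state: (channel quality, edge-server load, data sensitivity),
# each binned into 3 levels -> 27 states. Actions: 0 = compute locally,
# 1 = offload to the edge server.
N_CHANNEL, N_LOAD, N_SENS = 3, 3, 3
N_ACTIONS = 2
q_table = np.zeros((N_CHANNEL * N_LOAD * N_SENS, N_ACTIONS))

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning hyperparameters
W_LATENCY, W_RISK = 1.0, 2.0           # assumed efficiency/security weights

def encode(channel, load, sens):
    """Map a (channel, load, sensitivity) triple to a Q-table row index."""
    return (channel * N_LOAD + load) * N_SENS + sens

def step(state, action):
    """Toy environment: reward = -(latency cost) - (security-risk penalty)."""
    channel, load, sens = state
    if action == 1:  # offload: fast when the channel is good and load is low,
        latency = (2 - channel) + load       # but sensitive data is exposed
        risk = sens * 0.5                    # in transit (encryption assumed
    else:                                    # to reduce, not remove, the risk)
        latency = 3.0                        # local: slower but no exposure
        risk = 0.0
    reward = -(W_LATENCY * latency + W_RISK * risk)
    next_state = tuple(rng.integers(0, 3, size=3))  # conditions evolve randomly
    return reward, next_state

# Standard epsilon-greedy Q-learning over the toy environment.
state = tuple(rng.integers(0, 3, size=3))
for _ in range(20000):
    s = encode(*state)
    if rng.random() < EPSILON:
        action = int(rng.integers(0, N_ACTIONS))
    else:
        action = int(np.argmax(q_table[s]))
    reward, next_state = step(state, action)
    ns = encode(*next_state)
    q_table[s, action] += ALPHA * (reward + GAMMA * q_table[ns].max()
                                   - q_table[s, action])
    state = next_state

# Inspect the learned policy: the agent should offload when the channel is
# good and the server is lightly loaded, and keep sensitive tasks local.
for sens in range(N_SENS):
    policy = ["local" if np.argmax(q_table[encode(c, l, sens)]) == 0
              else "offload"
              for c in range(N_CHANNEL) for l in range(N_LOAD)]
    print(f"sensitivity={sens}: {policy}")
```

In a full DRL treatment the Q-table would be replaced by a neural network over a continuous state space, but the same structure carries over: the reward's weighted penalty terms are what let the agent trade task-completion efficiency against security risk, which is the balance the evaluation in this paper examines.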