Virtual network embedding (VNE) is a promising technique that enables 5G networks to satisfy the requirements of each service via network virtualization. To achieve better embedding performance, an algorithm must automatically detect the network status and make near-optimal embedding decisions. However, existing VNE algorithms disregard the long-term effect: they focus on selecting a single virtual network request from the waiting queue rather than considering all waiting virtual network requests concurrently. In this study, we propose a hierarchical cooperative multi-agent reinforcement learning (RL) algorithm that optimizes the VNE problem by maximizing the average revenue, minimizing the average cost, and improving the request acceptance ratio. The proposed algorithm combines two RL techniques: 1) two-level hierarchical reinforcement learning, which solves the problem efficiently by dividing it into subproblems, and 2) multi-agent cooperative reinforcement learning, which improves performance through the cooperation of multiple agents. To evaluate and analyze the proposed scheme from a long-term perspective, four performance metrics are measured: revenue, cost, revenue-to-cost ratio, and acceptance ratio. The simulation results demonstrate that the proposed VNE algorithm based on hierarchical and multi-agent reinforcement learning outperforms existing RL-based approaches.
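
As a rough illustration of the four long-term metrics named above, the following Python sketch computes them under the VNE definitions commonly used in the literature: revenue as the requested CPU plus bandwidth of accepted requests, and cost as the resources actually consumed on the substrate network. This is a minimal sketch, not the paper's implementation; the data structures (`VNR`, `hop_counts`) are illustrative assumptions.

```python
# Minimal sketch (assumed definitions): long-term VNE metrics, where the
# revenue of an accepted virtual network request (VNR) is the sum of its
# requested CPU and bandwidth, and its cost is the requested CPU plus the
# bandwidth consumed on every substrate link of the embedded paths.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class VNR:
    cpu_demands: List[float]                       # CPU demand per virtual node
    bw_demands: Dict[Tuple[int, int], float]       # bandwidth demand per virtual link
    accepted: bool = False
    hop_counts: Dict[Tuple[int, int], int] = field(default_factory=dict)  # substrate hops per virtual link

def revenue(vnr: VNR) -> float:
    return sum(vnr.cpu_demands) + sum(vnr.bw_demands.values())

def cost(vnr: VNR) -> float:
    # Each virtual link consumes bandwidth on every substrate link of its path.
    return sum(vnr.cpu_demands) + sum(
        bw * vnr.hop_counts[link] for link, bw in vnr.bw_demands.items()
    )

def long_term_metrics(vnrs: List[VNR]) -> Dict[str, float]:
    accepted = [v for v in vnrs if v.accepted]
    total_rev = sum(revenue(v) for v in accepted)
    total_cost = sum(cost(v) for v in accepted)
    return {
        "avg_revenue": total_rev / len(vnrs),
        "avg_cost": total_cost / len(vnrs),
        "revenue_to_cost": total_rev / total_cost if total_cost else 0.0,
        "acceptance_ratio": len(accepted) / len(vnrs),
    }
```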