Various edge collaboration schemes based on reinforcement learning (RL) have been proposed to improve the quality of experience (QoE). Deep RL (DRL) maximizes cumulative rewards through large-scale exploration and exploitation. However, existing DRL schemes rely on fully connected layers and therefore do not capture the temporal structure of states. Moreover, they learn the offloading policy without accounting for the importance of individual experiences, and they learn insufficiently because each agent's experience is limited in distributed environments. To address these problems, we propose a distributed DRL-based computation offloading scheme that improves QoE in edge computing environments. The proposed scheme selects the offloading target by modeling the task service time and load balance. We introduce three methods to improve learning performance. First, the DRL scheme uses least absolute shrinkage and selection operator (LASSO) regression and an attention layer to capture temporal states. Second, it learns the optimal policy according to the importance of each experience, measured by the TD error and the loss of the critic network. Finally, agents adaptively share experiences with one another, based on the strategy gradient, to alleviate the data sparsity problem. Simulation results show that the proposed scheme achieves lower variation and higher rewards than existing schemes.
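
To illustrate the second point, the following is a minimal sketch of an importance-based replay buffer in which the sampling priority combines the TD error with the critic-network loss. The class name, the mixing weights `alpha` and `beta`, and the convex combination of the two terms are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Replay buffer whose sampling priority mixes |TD error| and critic loss.

    The priority formula below is an assumed, illustrative choice."""

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling
        self.beta = beta        # assumed mixing weight between TD error and critic loss
        self.eps = eps          # keeps every priority strictly positive
        self.storage = []
        self.priorities = []

    def add(self, transition, td_error, critic_loss):
        # Assumed priority: a convex mix of |TD error| and the critic loss.
        priority = (self.beta * abs(td_error)
                    + (1.0 - self.beta) * critic_loss + self.eps) ** self.alpha
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
            self.priorities.pop(0)
        self.storage.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Sample transitions with probability proportional to their priority.
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.storage), size=batch_size, p=probs)
        return [self.storage[i] for i in idx]
```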