Task offloading in mobile edge computing (MEC) improves the computing performance, data storage capacity, and energy efficiency of mobile devices by offloading computational tasks to edge servers. Efficient task offloading leverages MEC technology to reduce task processing latency and energy consumption. By integrating the reasoning ability and machine intelligence of cognitive computing architectures such as SOAR and ACT-R, reinforcement learning (RL) algorithms have been applied to the task offloading problem in MEC. To address the inability of conventional deep RL (DRL) algorithms to adapt to dynamic environments, this paper proposes a task offloading scheduling strategy that combines multiagent reinforcement learning with meta-learning. To jointly consider the two decisions of charging time and offloading strategy, we implement a two-agent learning network on each mobile device. To train the policy network efficiently, we propose a first-order approximation method based on the clipped surrogate objective. Finally, experiments are conducted with varying numbers of subtasks, transmission rates, and edge server performance; the results show that the MRL-based strategy achieves the best overall performance and can be quickly applied in a variety of environments with good stability and generalization.
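As context for the training objective mentioned above, the following is a minimal PyTorch sketch of the standard clipped surrogate loss from PPO (Schulman et al., 2017), the objective on which first-order approximation methods of this kind are typically built; the function name, tensor arguments, and the clipping threshold `eps` are illustrative assumptions, not the paper's implementation.

```python
import torch

def clipped_surrogate_loss(new_logp: torch.Tensor,
                           old_logp: torch.Tensor,
                           advantages: torch.Tensor,
                           eps: float = 0.2) -> torch.Tensor:
    # Probability ratio r_t = pi_theta(a|s) / pi_theta_old(a|s),
    # computed in log space for numerical stability.
    ratio = torch.exp(new_logp - old_logp)
    # Clipped surrogate objective: take the pessimistic (elementwise
    # minimum) of the unclipped and clipped terms, then negate the
    # mean because optimizers minimize.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

Clipping the probability ratio keeps each gradient step close to the behavior policy, which is what makes a first-order (single-gradient) approximation to the meta-update stable in practice.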