Edge computing extends computing resources from the data center to the network edge to better support latency-sensitive tasks. However, with the rise of the Internet of Things, edge devices with limited processing capacity struggle to serve requests under fluctuating request peaks. To meet the deadline constraints of latency-sensitive tasks, a feasible solution is to offload some of them to nearby edge devices. This article studies the request migration problem in edge computing systems, aiming to minimize the request deadline violation rate under realistic online arrival patterns, performance interference, and deadline constraints. Because a request comprises multiple services and migrating requests changes the resource contention pressure on servers, we decompose the problem into three sub-problems: splitting the request deadline to determine the maximum allowable response time of each service, estimating service performance under different resource pressures, and deciding the request migration strategy. To this end, we propose two deadline splitting methods, a performance interference model under multi-resource pressure, and two heuristic request migration strategies. Because we consider an online edge scenario, the number and types of future requests are unknown to the system. Simulation experiments show that our method incurs only about one-third as many deadline violations as competing methods.
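To make the first sub-problem concrete, the following is a minimal sketch of one plausible deadline-splitting rule, not necessarily either of the two methods proposed in the paper: the request's end-to-end deadline is divided among its constituent services in proportion to their baseline (isolated) execution times. The function name and inputs are illustrative assumptions.

```python
from typing import List


def split_deadline(request_deadline: float, baseline_times: List[float]) -> List[float]:
    """Assign each service in a request a sub-deadline proportional to its
    baseline execution time, so that the sub-deadlines sum to the request deadline.
    (Illustrative sketch only; the paper's actual splitting methods may differ.)"""
    total = sum(baseline_times)
    return [request_deadline * t / total for t in baseline_times]


if __name__ == "__main__":
    # Hypothetical example: a request with a 100 ms deadline composed of three
    # services whose baseline execution times are 10 ms, 30 ms, and 20 ms.
    print(split_deadline(100.0, [10.0, 30.0, 20.0]))  # -> [16.67, 50.0, 33.33] (approx.)
```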