2015
DOI: 10.1109/tmc.2015.2404791

Virtual Servers Co-Migration for Mobile Accesses: Online versus Off-Line

Abstract: In this paper, we study the problem of co-migrating a set of service replicas residing on one or more redundant virtual servers in clouds in order to satisfy a sequence of mobile batch-request demands in a cost-effective way. With such a migration, we can not only reduce the service access latency for end users but also minimize the network costs for service providers. However, the co-migration comes at the cost of bulk-data transfer, which increases the overall monetary costs for the service providers. To gain …
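The abstract frames co-migration as a trade-off between per-request access cost and bulk-data migration cost. As a minimal sketch only, assuming a toy model in which each batch demand originates from one region and the virtual server occupies one region at a time (the region names, cost functions, and values below are hypothetical, not the paper's formulation), the total cost of a given migration schedule can be evaluated as follows:

# Illustrative sketch only: a toy cost model for the co-migration trade-off
# described in the abstract. Names and cost values are hypothetical.

def schedule_cost(demands, placements, access_cost, migration_cost):
    """Total cost of serving a demand sequence under a placement schedule.

    demands[t]    -- region of the t-th batch request
    placements[t] -- region hosting the virtual server while serving demands[t]
    access_cost(s, d)    -- network cost of serving a demand at d from a server at s
    migration_cost(a, b) -- bulk-data transfer cost of moving the server from a to b
    """
    total = 0.0
    for t, demand in enumerate(demands):
        if t > 0 and placements[t] != placements[t - 1]:
            total += migration_cost(placements[t - 1], placements[t])
        total += access_cost(placements[t], demand)
    return total

# Toy usage: demands drift from "eu" to "us"; compare staying put vs. migrating once.
demands = ["eu", "eu", "us", "us", "us"]
stay    = ["eu"] * 5
follow  = ["eu", "eu", "us", "us", "us"]
access  = lambda s, d: 0.0 if s == d else 5.0
migrate = lambda a, b: 8.0
print(schedule_cost(demands, stay, access, migrate))    # 15.0
print(schedule_cost(demands, follow, access, migrate))  # 8.0

The offline problem is to choose the placement schedule that minimizes this total with the whole demand sequence known in advance; the online variant must commit to each placement before seeing future demands, which is the tension behind the paper's online-versus-offline comparison.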

Cited by 19 publications (5 citation statements); references 41 publications.
“…where t_{j,k} denotes the time for processing results to be sent from the source node j to the destination node k. The overall service latency for marker 2 in Fig. 2 can be described in (8).…”
Section: Overall Time Delay (mentioning, confidence: 99%)
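The quoted statement refers to an equation (8) of the citing paper that is not reproduced in this excerpt. Purely as a hedged sketch of the kind of decomposition the sentence suggests (the terms t_{i,j} and p_j below are assumptions; only the result-return time t_{j,k} appears in the quote), the overall service latency might take the additive form

T_total = t_{i,j} + p_j + t_{j,k},

where t_{i,j} is the time to ship the request from source node i to processing node j, p_j is the processing time at j, and t_{j,k} is the time to return the results from j to destination node k.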
“…In practice, IoT-related mobile devices usually depend on the Base Station (BS) that works with the edge computing node to obtain services. However, a study [7] by the European Telecommunications Standards Institute (ETSI) has found that the coverage of BS is often limited, which leads to increased service delay when the device leaves the corresponding coverage area [8]. Additionally, in order to provide different services, multiple virtual machines running on the same edge computing node may cause I/O interference, and the corresponding service latency also increases.…”
Section: Introduction (mentioning, confidence: 99%)
“…It enables various OSes to share physical resources. These OSes run as virtual machines (VMs) [15], and the hypervisor acts as a virtual machine monitor (VMM) that manages these VMs and allocates hardware, memory, CPU, and disk to each VM. The container technique, by contrast, is a lightweight, kernel-level (OS-layer) virtualization technique that runs on the physical host's OS.…”
Section: Virtualization (mentioning, confidence: 99%)
“…For the same objective function, but without any knowledge of content popularities, Reference 22 presented an online algorithm with a competitive ratio of O(log n) for the collaborative caching model in multi-cluster collaborative systems. Building on the idea of co-migration, Wang et al. presented a random competition algorithm, a parallel dynamic programming algorithm combining a branch-and-bound strategy with a sampling technique [23]. Wang et al. also studied a homogeneous cache model, in which the transmission cost of content between servers and the caching cost at every edge node are fixed [15].…”
Section: Introduction (mentioning, confidence: 99%)
“…Considering the idea of co-migration, Wang et al. presented a random competition algorithm, a parallel dynamic programming algorithm combining a branch-and-bound strategy with a sampling technique [23]. Wang et al. also studied a homogeneous cache model, in which the transmission cost of content between servers and the caching cost at every edge node are fixed [15]. In that paper, they presented an O(mn) polynomial-time algorithm based on dynamic programming to minimize the total cost, where m and n are the numbers of nodes and requests, respectively.…”
(mentioning, confidence: 99%)
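The quoted description of the O(mn) dynamic program is terse. The following is a minimal sketch under assumed simplifications (the function and variable names are hypothetical, and a fixed migration cost c_move stands in for the "fixed transmission cost between servers"); it shows how homogeneous costs let the per-step minimization reuse a single global minimum, so each of the n requests takes O(m) work and the whole run is O(mn):

def min_total_cost(requests, nodes, cache_cost, c_move, serve_cost):
    """dp[v] = minimum cost of serving requests[0..t] with the content cached
    at node v after step t. With a homogeneous (fixed) migration cost c_move,
    the best predecessor of every v is either v itself or the globally cheapest
    state, so each step costs O(m) and the whole run costs O(mn).
    """
    dp = {v: cache_cost[v] for v in nodes}       # cost of the initial placement
    for r in requests:
        best = min(dp.values())                  # cheapest state to migrate from
        new_dp = {}
        for v in nodes:
            stay_or_move = min(dp[v], best + c_move)
            new_dp[v] = stay_or_move + serve_cost(v, r)
        dp = new_dp
    return min(dp.values())

# Toy usage with hypothetical values.
nodes = ["a", "b"]
total = min_total_cost(
    requests=["a", "a", "b"],
    nodes=nodes,
    cache_cost={"a": 1.0, "b": 1.0},
    c_move=2.0,
    serve_cost=lambda v, r: 0.0 if v == r else 3.0,
)
print(total)  # 3.0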