2011
DOI: 10.1016/j.comcom.2010.06.024

Separating computation and storage with storage virtualization

Cited by 17 publications (4 citation statements)
References 15 publications
“…Therefore, in such a system, the data requests become two-stage jobs, consisting of the disk-read and the network-transmission operations, while each server becomes a two-stage flowshop, consisting of the disk-read and network-transmission processors (note that the disk-read and network-transmission can run in parallel in the same server), and scheduling a given set of such requests in a multiple-server center becomes an instance of the scheduling model we have formulated. We should remark that the time for disk-read and the time for network-transmission in a typical server are in general comparable and, due to the impact of cache systems, they need not have a linear relation [22]. Therefore, neither can be simply ignored if we want to maintain good performance for the cloud system.…”
Section: Motivations (mentioning)
confidence: 99%
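
The excerpt above casts each server as a two-stage flowshop: a disk-read stage followed by a network-transmission stage, with the two stages able to overlap across different requests. As a minimal sketch of that single-server view (not code from the cited paper, and with made-up job times), the Python fragment below orders a set of (disk-read, transmission) jobs by Johnson's rule, the classical makespan-minimizing order for a two-machine flowshop; the multi-server problem formulated by the citing work is more general and is not solved here.

    # Minimal sketch: one server modeled as a two-machine flowshop.
    # A job is a pair (r, t): r = disk-read time, t = network-transmission time.

    def johnson_order(jobs):
        """Sequence jobs by Johnson's rule (optimal makespan for one two-machine flowshop)."""
        front = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
        back = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: j[1], reverse=True)
        return front + back

    def makespan(jobs):
        """Completion time when the read stage and the transmission stage are pipelined."""
        read_done = trans_done = 0
        for r, t in jobs:
            read_done += r                                  # reads are served one after another
            trans_done = max(trans_done, read_done) + t     # a transmission starts only after its read
        return trans_done

    if __name__ == "__main__":
        requests = [(3, 2), (1, 4), (2, 2), (4, 1)]         # hypothetical (disk-read, transmission) times
        order = johnson_order(requests)
        print(order, makespan(order))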
“…Consider the situation in data centers as we described in Section 1. In order to improve the process of data-read/network-transmission, servers in the center may keep certain commonly used software code in main memory so that the time-consuming process of data-read can be avoided (see, for example, [22]). Thus, client requests for the code will become two-stage jobs J_i = (r_i, t_i) with r_i = 0.…”
Section: Dual Jobs and Dual Schedules (mentioning)
confidence: 99%
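
Tying this to the hypothetical Johnson's-rule sketch shown after the first citation statement above: a request whose code is already in main memory becomes a job with r_i = 0, which always satisfies r_i <= t_i, so Johnson's rule places it at the front of the sequence, where a zero read time never delays the transmission stage. The times below are again made up.

    # Jobs with r_i = 0 (code served from memory) end up first in Johnson's order.
    cached = [(0, 3), (0, 1)]      # hypothetical requests served from main memory
    uncached = [(2, 2), (4, 1)]    # hypothetical requests that still need a disk read
    print(johnson_order(cached + uncached))   # -> [(0, 3), (0, 1), (2, 2), (4, 1)]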
“…Transparent Computing paradigm [7][8][9] aims at the vision advocated by pervasive computing in which users can demand computing services in a hassle-free way. Fig.…”
Section: Preliminaries (mentioning)
confidence: 99%
“…The TCP takes charge of the management and allocation of OS resources, application resources and users' private resources, by providing data storage and access services for the terminal users through the transparent network. All the requests that come from terminals are redirected to the Vdisks in the TCP [21].…”
Section: Preliminary (mentioning)
confidence: 99%
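
As a rough, purely illustrative sketch of the redirection described in this excerpt, the fragment below maps each terminal user to a virtual disk (Vdisk) image held on the platform and serves block reads from it; the class and file layout (VdiskStore, one .img file per user, 4 KiB blocks) are assumptions made for the example and are not taken from the cited work.

    # Hypothetical platform-side redirection of terminal requests to Vdisk images.
    import os

    class VdiskStore:
        """Maps each terminal user to a Vdisk image file and serves block reads from it."""

        BLOCK_SIZE = 4096  # assumed block size for the example

        def __init__(self, image_dir):
            self.image_dir = image_dir

        def _image_path(self, user_id):
            # One image per user, holding OS, application and private data (assumed layout).
            return os.path.join(self.image_dir, f"{user_id}.img")

        def read_block(self, user_id, block_no):
            # A terminal's local disk request is redirected here and served from its Vdisk.
            with open(self._image_path(user_id), "rb") as img:
                img.seek(block_no * self.BLOCK_SIZE)
                return img.read(self.BLOCK_SIZE)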