2011 IEEE Sixth International Conference on Networking, Architecture, and Storage
DOI: 10.1109/nas.2011.50
Azor: Using Two-Level Block Selection to Improve SSD-Based I/O Caches

Abstract: Flash-based solid state drives (SSDs) exhibit potential for solving I/O bottlenecks by offering superior performance over hard disks for several workloads. In this work we design Azor, an SSD-based I/O cache that operates at the block level and is transparent to existing applications, such as databases. Our design provides various choices for associativity, write policies, and cache line size, while maintaining a high degree of I/O concurrency. Our main contribution is that we explore differentiation o…

Cited by 16 publications (17 citation statements)
References 14 publications
“…A further degree of control is the capability to provide per-workload instances of the filesystem journal, thus providing each workload with a separate access path to the underlying storage devices. In this paper, we evaluate the effectiveness of this approach for a variety of workloads running on a virtualization host that has two types of devices, solid-state devices (SSDs) and hard disks, arranged so that the SSDs serve as a transparent cache (pFlash) for the underlying hard disks ([12]). With Vanguard we achieve three prerequisites for performance isolation: (1) enforce per-workload limits on the amount of I/O memory and SSD cache, (2) reduce contention across workloads due to synchronization and global policies in the hypervisor, and (3) allow I/O memory and threads to be placed with improved thread/data affinity, on servers which exhibit more pronounced NUMA and contention effects.…”
Section: I/O Stack (confidence: 99%)
“…Completing our I/O-Stack we include pFlash, a block-level write-back SSD cache, derived from our own previous work [12]. pFlash is a transparent cache, as it exports a block device with size equal to the size of the device being cached.…”
Section: SSD Cache (confidence: 99%)
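The transparent-cache idea quoted above (a cached block device that exports the same size as the backing device, with write-back semantics) can be sketched roughly as follows. This is a minimal illustrative model only; all names (`BlockCache`, `hdd`, `ssd`) are assumptions, and the real pFlash is an in-kernel block driver, not user-space Python.

```python
# Sketch of a transparent block-level write-back SSD cache.
# The exported device size equals the backing device size, so
# existing applications need no changes.

class BlockCache:
    BLOCK = 512

    def __init__(self, hdd_blocks, ssd_slots):
        self.hdd = bytearray(hdd_blocks * self.BLOCK)  # simulated backing disk
        self.ssd = {}          # slot -> (block_no, data, dirty)
        self.map = {}          # block_no -> slot
        self.capacity = ssd_slots
        self.size_blocks = hdd_blocks  # exported size == backing size

    def read(self, block_no):
        if block_no in self.map:                       # cache hit: serve from SSD
            return self.ssd[self.map[block_no]][1]
        off = block_no * self.BLOCK                    # miss: fill from disk
        data = bytes(self.hdd[off:off + self.BLOCK])
        self._insert(block_no, data, dirty=False)
        return data

    def write(self, block_no, data):
        # Write-back: update only the SSD copy and mark it dirty;
        # the disk is updated lazily on eviction.
        self._insert(block_no, data, dirty=True)

    def _insert(self, block_no, data, dirty):
        if block_no in self.map:
            slot = self.map[block_no]
        elif len(self.ssd) < self.capacity:
            slot = len(self.ssd)
        else:                                          # evict an arbitrary victim
            slot, (victim, vdata, vdirty) = next(iter(self.ssd.items()))
            if vdirty:                                 # flush dirty data to disk
                off = victim * self.BLOCK
                self.hdd[off:off + self.BLOCK] = vdata
            del self.map[victim]
        self.ssd[slot] = (block_no, data, dirty)
        self.map[block_no] = slot
```

The write-back policy trades durability for write latency: a dirty block reaches the hard disk only when it is evicted from the SSD.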
“…Reference [12] examines how SSDs can be used as a large cache on top of RAID to conserve energy. More recently, Klonatos et al [13] designed the Azor system, which uses SSDs as caches in the I/O path to transparently and dynamically place data blocks in the SSDs. In all cases SSDs demonstrate potential for improved performance.…”
Section: Related Work (confidence: 99%)
“…Recently, SSD devices became a faster option than hard disks. As SSD is much more expensive than common disks but cheaper than RAM, it is sometimes used as a cache layer [1] for storage.…”
Section: Introduction (confidence: 99%)
“…1). Including SSDs in the I/O caching layer of systems improves the response time of requests served from the cache, and hence a wide range of enterprise and academic I/O cache architectures have been proposed with the purpose of maximizing the hit ratio of the caching layer [10], [11], [12], [2], [13], [3], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28]. In these I/O caching schemes, mainly based on datapath or push-mode cache architectures, all accesses are directed to the caching layer [29], so that the highest number of requests is served by the caching layer to achieve the highest performance in terms of hit ratio.…”
Section: Introduction (confidence: 99%)
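The datapath ("push mode") organization described in the quote above, where every request passes through the caching layer and the design goal is maximizing hit ratio, can be illustrated with a small trace-driven sketch. The LRU replacement policy and the function name `hit_ratio` are illustrative assumptions, not details from the cited works.

```python
from collections import OrderedDict

def hit_ratio(trace, cache_blocks):
    """Replay a block-access trace through a datapath-mode cache.

    Every access goes to the cache first; a miss fills the cache
    (evicting the least-recently-used block when full). Returns the
    fraction of accesses served as hits.
    """
    cache = OrderedDict()   # block_no -> None, ordered by recency
    hits = 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # refresh recency on a hit
        else:
            if len(cache) >= cache_blocks:
                cache.popitem(last=False)   # evict least-recently used
            cache[block] = None             # fill from backing disk
    return hits / len(trace)
```

For example, replaying the trace `[1, 2, 1, 2]` with two cache blocks yields a hit ratio of 0.5: the first two accesses miss and fill the cache, the last two hit.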