Disk mirroring, or RAID level 1, stores the same data twice, on two independent disks, so that any single disk failure can be tolerated. The twofold storage overhead is acceptable in view of the drop in storage cost per gigabyte and rapidly increasing disk capacities. Disk access time, on the other hand, is improving at a very slow pace, so another important advantage of disk mirroring is that it doubles the disk access bandwidth available for processing read requests. Efficient routing of read requests to disks and local disk scheduling can improve performance even further. We are primarily concerned with two RAID1 configurations: (i) source-initiated routing with the independent queues (IQ) method; (ii) destination-initiated routing with the shared queue (SQ) method. Static, dynamic, and affinity-based (AB) routing methods are used to distribute requests with the IQ method. We compare the performance of various IQ- and SQ-based routing policies using a random-number-driven simulation. While the more sophisticated routing policies yield some improvement, performance is dominated by the local disk scheduling policy. The SQ method allows resource sharing, so it tends to outperform IQ-based routing, but it requires the scheduler to keep track of the state of the disk drives. As further means to improve performance, we consider the effect of prioritizing reads with respect to writes, transposed data allocation, and replicating data more than twice.
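The three IQ routing policies named above can be sketched in a few lines. This is a minimal illustrative model, not the paper's simulator: the class name, queue representation, and address-space split for the affinity-based policy are all assumptions made for the example.

```python
from collections import deque

class MirroredPair:
    """Illustrative read-request router for a two-disk mirror (IQ method).

    Policies (assumed interpretations of the text):
      static   -- alternate between the disks regardless of load (round-robin)
      dynamic  -- join the shorter of the two disk queues
      affinity -- route by disk address, splitting the block range in half
    """

    def __init__(self, policy="dynamic", n_blocks=1_000_000):
        self.queues = (deque(), deque())  # one queue per disk (IQ method)
        self.policy = policy
        self.n_blocks = n_blocks
        self._rr = 0                      # round-robin counter (static policy)

    def route_read(self, block):
        if self.policy == "static":       # alternate disks, ignore load
            disk = self._rr
            self._rr ^= 1
        elif self.policy == "dynamic":    # join the shorter queue
            disk = 0 if len(self.queues[0]) <= len(self.queues[1]) else 1
        else:                             # affinity-based: split address space
            disk = 0 if block < self.n_blocks // 2 else 1
        self.queues[disk].append(block)
        return disk
```

The dynamic policy needs only queue lengths, which the source (host) already knows; the SQ method goes further by also tracking disk state, which is why it is described as destination-initiated.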
Mirrored disk scheduling

Major advances in magnetic disk technology have resulted in very high areal recording densities. Disk capacities have increased by three orders of magnitude in the last decade, accompanied by a dramatic drop in the cost per gigabyte. Due to the increased linear recording density, the transfer time of the small blocks of data accessed by online transaction processing (OLTP) applications constitutes a small fraction of disk rotation time, so it is negligibly small compared to disk positioning time, i.e., the sum of seek time and rotational latency. The increase in disk RPM also contributes to this effect, but more importantly it reduces the rotational latency, which for small block transfers is roughly one half of disk rotation time. The improvement in seek time is also very slow due to its mechanical nature. The overall reduction in disk positioning time is below 8% annually [5].

The fact that the improvement in disk access time lags far behind the increase in disk capacities is especially important for OLTP applications, which are concerned with random accesses to small data blocks. The performance of certain benchmarks specified by the Transaction Processing Performance Council 1 is determined by the maximum throughput attained while transaction response time remains below a certain threshold. Transaction response time is affected by the number of disk accesses carried out on its behalf. The database buffer in main memory is effective in eliminating a

1 http://www.tpc.org.
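A back-of-the-envelope calculation makes the claim concrete that small-block transfer time is negligible next to positioning time. The drive parameters below (15,000 RPM, 3.5 ms average seek, 80 MB/s sustained transfer, 4 KB blocks) are illustrative assumptions, not figures from the text.

```python
# Positioning time vs. transfer time for a small-block random access.
RPM = 15_000
rotation_ms = 60_000 / RPM           # one full rotation: 4.0 ms
latency_ms = rotation_ms / 2         # average rotational latency: 2.0 ms
seek_ms = 3.5                        # assumed average seek time
transfer_rate_mb_s = 80              # assumed sustained transfer rate
block_kb = 4

transfer_ms = block_kb / 1024 / transfer_rate_mb_s * 1000  # ~0.05 ms
positioning_ms = seek_ms + latency_ms                      # 5.5 ms

print(f"positioning: {positioning_ms:.2f} ms, transfer: {transfer_ms:.3f} ms")
```

Under these assumptions the transfer accounts for under 1% of the access time, which is why the text treats positioning time (and its roughly 8% annual improvement) as the bottleneck for OLTP workloads.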