As big data has evolved over the past few years, limited storage space and I/O bandwidth have become among the most pressing challenges to overcome. To mitigate these problems, data compression schemes reduce the amount of data to be stored and transmitted at the cost of additional CPU overhead. Many researchers have attempted to offload this computational burden from the CPU onto specialized hardware. However, the space savings from data compression often come from only a small portion of the data, so compressing all data regardless of its compressibility can waste computational resources. Our work aims to decrease the cost of data compression by introducing a selective compression scheme based on data compressibility prediction. The proposed prediction method provides finer-grained selectivity for combinational compression and reduces the resources consumed by the compressibility predictor, enabling selective compression at low cost. To verify the proposed scheme, we implemented a DEFLATE compression system on a field-programmable gate array (FPGA) platform. Experimental results demonstrate that the proposed scheme improves compression throughput by 34.15% with a negligible decrease in compression ratio.
INDEX TERMS Data compression, Huffman coding, LZ77 encoding, accelerator architecture, field-programmable gate array, estimation, compressibility.
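The selective-compression idea above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the predictor here is a crude distinct-byte-count heuristic over a small prefix sample (the paper's actual predictor and threshold are not specified in the abstract), and `zlib` stands in for the hardware DEFLATE engine.

```python
import zlib

def estimate_compressibility(block: bytes, sample_size: int = 256) -> float:
    """Crude compressibility proxy: fewer distinct byte values in a
    small prefix sample suggests more redundancy, hence higher score."""
    sample = block[:sample_size]
    if not sample:
        return 0.0
    return 1.0 - len(set(sample)) / 256.0

def selective_compress(block: bytes, threshold: float = 0.5):
    """Compress only blocks predicted compressible; pass the rest
    through raw to save compute on incompressible data."""
    if estimate_compressibility(block) >= threshold:
        return True, zlib.compress(block)   # DEFLATE path
    return False, block                      # store raw

# Highly repetitive data is predicted compressible; byte-uniform data is not.
did_compress, out = selective_compress(b"A" * 4096)
```

The key point the abstract makes is that the prediction step must be much cheaper than compression itself; any real predictor would be tuned against the false-negative cost of skipping compressible blocks.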
Many flash storage systems divide input/output (I/O) requests involving large amounts of data into sub-requests to exploit their internal parallelism. In this case, an I/O request completes only after all of its sub-requests have completed, so non-critical sub-requests that finish quickly do not affect I/O latency. To reduce I/O latency efficiently, we propose a buffer management scheme that allocates buffer space by considering the relationship between sub-request processing time and I/O latency. The proposed scheme prevents non-critical sub-requests from wasting ready-to-use buffer space by avoiding situations in which both ready-to-use and not-ready buffer space are allocated to a single I/O request. To allocate the same type of buffer space to an I/O request, the scheme first groups sub-requests derived from the same I/O request and then applies its allocation policy in units of sub-request groups. When the ready-to-use buffer space is insufficient for the sub-request group being processed, the scheme does not allocate it to that group but instead sets it aside for future I/O requests. Experimental results show that the proposed scheme can reduce I/O latency by up to 24% compared with prevalent buffer management schemes.
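The group-based allocation policy described above can be sketched as follows. This is an illustrative model only: `BufferPool`, `allocate_for_group`, and the entry counts are assumptions for the sketch, not the paper's actual data structures, and real allocation would be in buffer entries tied to hardware state.

```python
from dataclasses import dataclass

@dataclass
class BufferPool:
    ready: int      # ready-to-use buffer entries (illustrative count)
    not_ready: int  # entries not yet ready (e.g., pending flush)

def allocate_for_group(pool: BufferPool, group_size: int) -> str:
    """Allocate buffer space per sub-request group, never mixing types.

    If ready-to-use space cannot cover the whole group, the group takes
    not-ready space instead, leaving the ready space for future requests
    whose latency it can actually improve."""
    if pool.ready >= group_size:
        pool.ready -= group_size
        return "ready"
    if pool.not_ready >= group_size:
        pool.not_ready -= group_size
        return "not_ready"
    return "stall"  # neither pool can serve the whole group yet
```

The point of the sketch is the all-or-nothing rule: a group whose critical path already includes not-ready space gains nothing from partially ready allocations, so ready entries are withheld from it.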
Physically-addressable solid-state drives (PASSDs) are secondary storage devices that provide a physical-address-based interface through which a host system directly controls NAND flash memory. PASSDs overcome shortcomings such as latency variability, resource under-utilization, and the log-on-log problem associated with legacy SSDs. However, in some operating environments, the write response time increases significantly because the PASSD reports the completion of a host write command synchronously (i.e., write-through) owing to reliability concerns. This contrasts with asynchronous processing (i.e., write-back), which reports completion immediately after data are received in high-performance volatile memory that subsequently serves as a write buffer to hide the operation time of NAND flash memory. Herein, we propose a new scheme that guarantees write reliability, enabling reliable asynchronous write operation in a PASSD. The scheme uses a large-granularity mapping table to minimize memory requirements and performs internal operations during idle time to avoid response delays. Results demonstrate that the proposed PASSD reduces the average write response time by up to 88% and guarantees reliability without performance degradation.
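The write-back path with idle-time internal operations can be sketched as below. Everything here is an assumption for illustration: the `WriteBuffer` and `FlashStub` classes, the 1024-page mapping granularity, and the flush trigger are invented for the sketch and do not reflect the actual PASSD design, which the abstract does not detail.

```python
SEGMENT = 1024  # assumed large mapping granularity: one entry per 1024 pages

class FlashStub:
    """Stand-in for NAND flash: programs data, returns a physical page number."""
    def __init__(self):
        self.pages = {}
        self.next_ppn = 0
    def program(self, lpn: int, data: bytes) -> int:
        self.pages[self.next_ppn] = data
        self.next_ppn += 1
        return self.next_ppn - 1

class WriteBuffer:
    def __init__(self):
        self.buffer = {}   # lpn -> data staged in (battery-backed) volatile memory
        self.mapping = {}  # coarse-grained: segment index -> flash location

    def host_write(self, lpn: int, data: bytes) -> str:
        """Write-back: stage data and report completion immediately,
        hiding NAND program latency from the host."""
        self.buffer[lpn] = data
        return "done"

    def idle_flush(self, flash: FlashStub) -> None:
        """Internal operation run at idle time: program buffered data to
        flash and update the large-granularity mapping table."""
        for lpn, data in list(self.buffer.items()):
            ppn = flash.program(lpn, data)
            self.mapping[lpn // SEGMENT] = ppn  # one entry per segment
            del self.buffer[lpn]
```

The design point the abstract makes is that the mapping metadata needed to recover buffered writes is kept small (large granularity) so it can be persisted cheaply, and flushes are deferred to idle periods so they never sit on the host's response path.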
Many NAND flash storage systems access flash memories by generating flash commands to process input/output (I/O) requests from a host system. Because the order in which flash commands are processed affects I/O performance, previous studies have used command scheduling to prioritize flash read commands and improve read performance. However, in addition to the flash commands that access user data requested by the host, flash storage issues commands for internal tasks that improve its I/O performance and operational efficiency. In embedded flash storage in particular, the flash commands generated by map cache management significantly influence I/O request latency: the processing time of an I/O request depends on the execution time of both the flash commands for mapping information and those for user data. In this paper, we propose a command scheduling scheme that improves read performance in the presence of these map flash commands. Priority is given to flash read and program commands derived from read requests, and among these, the command with the shorter processing time is executed first. Consequently, the waiting time of flash commands is reduced, thereby improving read latency. Experiments with real workloads show that the proposed scheduling scheme reduces average read latency by up to 51% compared with the existing scheduling scheme and achieves an effective performance improvement even with a small map cache.
INDEX TERMS Command scheduling, map cache, address translation, flash translation layer, NAND flash storage, NAND flash memory.
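The two-level priority described above (read-request-originated commands first, and among those the shorter command first) can be sketched with a priority queue. The command names and processing times below are illustrative assumptions, not taken from the paper.

```python
import heapq

class CommandQueue:
    """Orders flash commands by (read-originated?, processing time).

    A short map read issued on behalf of a host read request runs before
    a long user-data read, and both run before internal commands such as
    garbage-collection programs."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so command objects are never compared

    def push(self, cmd: str, origin_is_read: bool, proc_time_us: int) -> None:
        key = (0 if origin_is_read else 1, proc_time_us, self._seq, cmd)
        heapq.heappush(self._heap, key)
        self._seq += 1

    def pop(self) -> str:
        return heapq.heappop(self._heap)[3]

q = CommandQueue()
q.push("gc_program", origin_is_read=False, proc_time_us=900)  # internal task
q.push("user_read", origin_is_read=True, proc_time_us=80)
q.push("map_read", origin_is_read=True, proc_time_us=25)      # fetch mapping info
```

Running the short map read first shortens the queueing delay of every command behind it, which is the mechanism behind the latency reduction the abstract reports.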