This paper proposes a code compression technique called operand factorization. The central idea of operand factorization is the separation of program expression trees into sequences of tree patterns (opcodes) and operand patterns (registers and immediates). Using this technique, we show that tree and operand patterns have exponential frequency distributions. A set of experiments was designed to explore this feature. They reveal an average compression ratio of 43% for SPECInt95 programs. A decompression engine is proposed, which assembles tree and operand patterns into uncompressed instruction sequences. An encoding that improves the design of the decompression engine results in a 48% compression ratio. Compression ratio numbers take into consideration an estimate of the decompression engine size.
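As a rough illustration of the factorization step, the sketch below (a minimal Python mock-up, not the authors' implementation) splits a toy instruction stream into a tree-pattern stream of opcodes and an operand-pattern stream of registers/immediates, then counts how often each pattern recurs; the instruction format, window size, and helper names are assumptions made only for this example.

```python
from collections import Counter

# Toy instruction stream: (opcode, operands) pairs standing in for
# expression-tree nodes. The format is assumed for illustration only.
instructions = [
    ("addu", ("r3", "r1", "r2")),
    ("lw",   ("r4", "8(r3)",)),
    ("addu", ("r6", "r4", "r5")),
    ("lw",   ("r7", "8(r6)",)),
]

def factorize(instrs):
    """Separate a sequence of instructions into a tree pattern
    (opcode sequence) and an operand pattern (operand sequence)."""
    tree_pattern = tuple(op for op, _ in instrs)
    operand_pattern = tuple(opnd for _, opnds in instrs for opnd in opnds)
    return tree_pattern, operand_pattern

# Factorize fixed-size windows (here: pairs of instructions) and count
# pattern frequencies; in real programs these counts are heavily skewed,
# which is what makes dictionary encoding of the patterns pay off.
tree_counts, operand_counts = Counter(), Counter()
for i in range(0, len(instructions), 2):
    t, o = factorize(instructions[i:i + 2])
    tree_counts[t] += 1
    operand_counts[o] += 1

print(tree_counts.most_common())
print(operand_counts.most_common())
```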
Code compression has been shown to be an effective technique for reducing code size in memory-constrained embedded systems. It has also been used as a way to increase the cache hit ratio, thus reducing power consumption and improving performance. This paper proposes an approach that mixes static and dynamic instruction profiling in dictionary construction, so as to best exploit the trade-off between compression ratio and performance. Compressed instructions are stored as variable-size indices into fixed-size codewords, eliminating compressed-code misalignment. Experimental results, using the Leon (SPARC V8) processor and a program mix from MiBench and MediaBench, show that our approach halves the number of cache accesses and the power consumption while producing compression ratios as low as 56%.
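The core idea of weighting dictionary candidates by both static occurrence and dynamic execution frequency can be sketched as follows; this is a hypothetical Python model of the selection step, and the weighting function, its alpha parameter, and the dictionary size are assumptions rather than the paper's exact heuristic.

```python
from collections import Counter

def build_dictionary(static_code, trace, dict_size=4, alpha=0.5):
    """Pick dictionary entries using a blend of static and dynamic profiles.

    static_code: instruction words as they appear in the binary
                 (static profile -> favors code-size reduction).
    trace:       instruction words in execution order
                 (dynamic profile -> favors cache/energy behavior).
    alpha:       0.0 = purely dynamic, 1.0 = purely static (assumed knob).
    """
    static_freq = Counter(static_code)
    dynamic_freq = Counter(trace)

    def score(instr):
        return (alpha * static_freq[instr]
                + (1.0 - alpha) * dynamic_freq[instr])

    candidates = set(static_code)
    return sorted(candidates, key=score, reverse=True)[:dict_size]

# Toy example: "ld_a" dominates the execution trace, "nop" the binary.
static_code = ["nop"] * 6 + ["ld_a", "add", "st_b"]
trace = ["ld_a"] * 50 + ["add"] * 20 + ["nop"] * 3
print(build_dictionary(static_code, trace, dict_size=2, alpha=0.3))
```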
Reducing program size has become an important goal in the design of modern embedded systems targeted to mass production. This problem has driven efforts aimed at designing processors with shorter instruction formats (e.g., ARM Thumb and MIPS16) or able to execute compressed code (e.g., IBM PowerPC 405). This paper proposes three code compression algorithms for embedded RISC architectures. In all algorithms, the encoded symbols are extracted from program expression trees. The algorithms differ in the granularity of the encoded symbols, which are selected from whole trees, parts of trees, or single instructions. Dictionary-based decompression engines are proposed for each compression algorithm. Experimental results, based on SPEC CINT95 programs running on the MIPS R4000 processor, reveal an average compression ratio of 53.6% (31.5%) if the area of the decompression engine is (not) considered. Index Terms: Code compression, RISC architecture.
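The two figures quoted above differ only in whether the dictionary/decompression-engine area is charged against the compressed program. A minimal way to express that accounting (a hypothetical helper, not taken from the paper, with toy byte counts chosen purely for illustration) is:

```python
def compression_ratio(original_bytes, compressed_bytes, engine_bytes=0):
    """Compression ratio as commonly defined in the code-compression
    literature: compressed size divided by original size (lower is better).
    Optionally charge the dictionary/decompression-engine area as well."""
    return (compressed_bytes + engine_bytes) / original_bytes

# Toy numbers only: a 100 KB program compressed to 31.5 KB of code,
# plus a 22.1 KB dictionary/engine estimate.
print(compression_ratio(100_000, 31_500))          # 0.315 -> 31.5%
print(compression_ratio(100_000, 31_500, 22_100))  # 0.536 -> 53.6%
```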
Electronic system level (ESL) modeling allows early hardware-dependent software (HDS) development. Due to broad CPU diversity and shrinking time-to-market, HDS development can rely neither on hand-retargeting binary tools nor on pre-existing tools within standard packages. As a consequence, binary utilities that can be easily adapted to new CPU targets are of increasing interest. We present in this article a framework for the automatic generation of binary utilities. It relies on two innovative ideas: platform-aware modeling and more inclusive relocation handling. Generated assemblers, linkers, disassemblers, and debuggers were validated for MIPS, SPARC, PowerPC, i8051, and PIC16F84. An open-source prototype generator is available for download.
Code compression, initially conceived as an effective technique to reduce code size in embedded systems, today also brings advantages in terms of performance and energy consumption, due to an increase in the cache hit ratio. This paper proposes the design of a code decompressor engine for our dictionary-based method, embedding it into the Leon (SPARC V8) processor. Our design guarantees that the processor cycle time is maintained and that decompression is performed on the fly. We have achieved a functional implementation on an FPGA, with compression ratios ranging from 72% to 88%, performance improvements as high as 45%, and reductions in energy consumption reaching 35%, validated with two real-world benchmark suites: MediaBench and MiBench. We also explore some trade-offs between compression ratio and performance.
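Functionally, an on-the-fly dictionary decompressor of this kind replaces each fetched index with its dictionary entry before the instruction reaches the pipeline; the short Python model below illustrates that lookup path only, and the index values, dictionary contents, and escape handling for uncompressed words are assumptions for illustration, not the engine's actual microarchitecture.

```python
# Hypothetical dictionary: index -> original 32-bit instruction word.
DICTIONARY = {
    0: 0x01000000,  # nop               (SPARC example encodings)
    1: 0x82102001,  # or  %g0, 1, %g1
    2: 0x86004002,  # add %g1, %g2, %g3
}
ESCAPE = 0xFF  # assumed marker: next item is an uncompressed word

def decompress(stream):
    """Expand a compressed stream of dictionary indices (with an escape
    for words left uncompressed) back into instruction words."""
    out, it = [], iter(stream)
    for item in it:
        if item == ESCAPE:
            out.append(next(it))          # pass uncompressed word through
        else:
            out.append(DICTIONARY[item])  # table lookup, as in the engine
    return out

compressed = [1, 2, ESCAPE, 0xDEADBEEF, 0]
print([hex(w) for w in decompress(compressed)])
```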