Call Admission Control (CAC) protocols play an important role in the performance of wireless networks. In this paper, we present a CAC protocol for cellular wireless networks. Our protocol admits "important" calls by degrading existing calls, i.e., by reducing the bandwidth allocated to them. The protocol assigns priorities to incoming calls and, at the same time, to existing calls: incoming calls are admitted according to their priorities, and existing calls are degraded according to theirs. We present simulation results showing the relation between network utilization, call-blocking probability, and the average bandwidth assigned over the lifetime of a call.
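The admission-with-degradation idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: the `Call` class, the per-call minimum-bandwidth floor `min_bw`, and the lowest-priority-first victim ordering are all assumptions made for the example.

```python
# Minimal sketch of priority-based call admission with bandwidth degradation.
# All names, fields, and policies here are illustrative assumptions,
# not the protocol defined in the paper.

class Call:
    def __init__(self, priority, bw, min_bw):
        self.priority = priority  # higher value = more important call
        self.bw = bw              # currently assigned bandwidth
        self.min_bw = min_bw      # floor below which the call is not degraded

def admit(call, existing, capacity):
    """Try to admit `call`; degrade lower-priority calls if needed."""
    free = capacity - sum(c.bw for c in existing)
    if free >= call.bw:
        existing.append(call)
        return True
    needed = call.bw - free
    # Only calls with strictly lower priority may be degraded,
    # lowest priority first.
    victims = sorted((c for c in existing if c.priority < call.priority),
                     key=lambda c: c.priority)
    if sum(c.bw - c.min_bw for c in victims) < needed:
        return False  # block: degradation cannot free enough bandwidth
    for c in victims:
        release = min(c.bw - c.min_bw, needed)
        c.bw -= release
        needed -= release
        if needed == 0:
            break
    existing.append(call)
    return True
```

For example, with a capacity of 6 units and one existing low-priority call using 4 units, a higher-priority call requesting 4 units is admitted after the existing call is degraded to its 2-unit floor; a lower-priority call arriving afterward is blocked.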
Cache memory plays a crucial role in the performance of any processor. The cache (SRAM), especially on-chip cache, is 3-4 times faster than main memory (DRAM) and can vastly improve processor performance and speed. The cache also consumes much less energy than main memory, leading to substantial power savings, which is especially important for embedded applications. Although the cache reduces the processor's overall energy consumption, the on-chip cache itself accounts for almost 40% of the total energy consumption of today's processors. In this paper, we propose an instruction-cache architecture that is a modification of the hotspot architecture. Our proposed architecture consists of a small filter cache in parallel with the hotspot cache, between the L1 cache and main memory; the filter cache holds code that was not captured by the hotspot cache. We also propose a prediction mechanism that steers each memory access to the hotspot cache, the filter cache, or the L1 cache. Our design has both a faster access time and lower energy consumption than the filter cache and hotspot cache architectures. We use the MiBench and MediaBench benchmarks together with the SimpleScalar simulator to evaluate the performance of our proposed architecture and compare it with the filter cache and hotspot cache architectures. The simulation results show that our design outperforms both the filter cache and the hotspot cache in both average memory access time and energy consumption.
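The steering idea can be illustrated with a toy predictor. The paper's actual prediction mechanism is not specified in this abstract; the last-destination table below (and the default of steering first-time accesses to L1) is purely an assumption for illustration.

```python
# Illustrative sketch of a steering predictor for a hotspot/filter/L1
# hierarchy. The prediction policy shown (remember where each block was
# last found; default to L1 on first access) is an assumption, not the
# mechanism proposed in the paper.

HOTSPOT, FILTER, L1 = "hotspot", "filter", "L1"

class SteeringPredictor:
    def __init__(self):
        self.table = {}  # instruction block address -> predicted destination

    def predict(self, block):
        # Steer first-time accesses to the L1 cache.
        return self.table.get(block, L1)

    def update(self, block, actual):
        # After the access resolves, remember where the block actually hit.
        self.table[block] = actual
```

A correct prediction lets the access go directly to the small, low-energy structure (hotspot or filter cache) without first probing the larger L1, which is where the access-time and energy savings come from.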
Two of the most important factors in the design of any processor are speed and energy consumption. In this paper, we propose a new cache architecture that results in faster memory access and lower energy consumption. Our proposed architecture does not require any changes to the processor architecture; it only assumes the existence of a BTB (branch target buffer). Using MediaBench, a benchmark suite for embedded applications, the SimpleScalar simulator, and the CACTI power simulator, we show that our proposed architecture consumes less energy and has better memory access time than many existing cache architectures.