…This enables the processor to sustain a higher instruction rate, which improves both performance and energy efficiency. These ESTs can be classified as follows:

- Prediction or pre-determination of the cache access result: [38], [55], [63]–[65]; memory storage for prediction of the cache access result [11], [14], [15], [49], [66]; pre-determination of the cache access result [18], [28], [50], [67]–[69]
- Reducing the number of ways consulted in each access, using software [17], compiler [40], [57], or hardware [12], [28], [47], [49], [50], [67], [69], [70] approaches
- Reducing switching activity: sequential cache-way access [14], [37], [49], [54]; multi-step tag-bit matching [71]; reducing the active tag bits or those actually compared [22], [34], [72]–[74]; accessing frequent (hot) data with lower energy [21], [46]
- ESTs for multicores [57], [66], [75] or multiprocessors

Ghosh et al. [67] propose a technique named 'Way Guard' to save dynamic energy in caches. This technique uses a segmented counting Bloom filter [77] with each cache way. …
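To make the idea of Bloom-filter-based way filtering concrete, the following C sketch shows a per-way counting Bloom filter that is queried before a way is probed: ways whose filter reports "definitely not present" need not activate their tag and data arrays, saving dynamic energy. This is only a minimal illustration in the spirit of [67], not the actual Way Guard design; the filter size, the two hash functions, the counter width, and the way count are arbitrary assumptions, and the segmented organization of the original filter is simplified to a single counter array per way.

```c
/*
 * Minimal sketch of per-way counting Bloom filters for skipping cache ways,
 * in the spirit of Way Guard [67]. Filter size, hash functions, counter
 * width, and way count are illustrative assumptions; the segmented filter
 * organization of [67] is simplified to one counter array per way.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_WAYS    4
#define FILTER_SIZE 256            /* counters per way (power of two)       */
#define NUM_HASHES  2              /* hash functions per lookup             */

typedef struct {
    uint8_t count[FILTER_SIZE];    /* small saturating counters             */
} way_filter_t;

static way_filter_t filters[NUM_WAYS];

/* Two simple illustrative hash functions over the block address. */
static unsigned hash_addr(uint64_t block_addr, int which)
{
    uint64_t h = block_addr * (which ? 0x9E3779B97F4A7C15ULL
                                     : 0xC2B2AE3D27D4EB4FULL);
    return (unsigned)(h >> 32) & (FILTER_SIZE - 1);
}

/* Called when a block is filled into 'way': record its presence. */
void filter_insert(int way, uint64_t block_addr)
{
    for (int i = 0; i < NUM_HASHES; i++) {
        unsigned idx = hash_addr(block_addr, i);
        if (filters[way].count[idx] < UINT8_MAX)   /* saturate, never wrap  */
            filters[way].count[idx]++;
    }
}

/* Called when a block is evicted from 'way': remove its contribution.
 * Saturated counters are never decremented, so a lost count can only cause
 * extra probes (false positives), never a wrong "skip" (false negative). */
void filter_remove(int way, uint64_t block_addr)
{
    for (int i = 0; i < NUM_HASHES; i++) {
        unsigned idx = hash_addr(block_addr, i);
        if (filters[way].count[idx] > 0 && filters[way].count[idx] < UINT8_MAX)
            filters[way].count[idx]--;
    }
}

/* True if the block MAY reside in 'way'; false means it definitely does not,
 * so the tag and data arrays of that way need not be activated at all. */
bool way_may_contain(int way, uint64_t block_addr)
{
    for (int i = 0; i < NUM_HASHES; i++)
        if (filters[way].count[hash_addr(block_addr, i)] == 0)
            return false;          /* a zero counter rules the way out      */
    return true;
}

int main(void)
{
    filter_insert(2, 0x1000);                      /* block cached in way 2 */

    for (int w = 0; w < NUM_WAYS; w++)             /* lookup of block 0x1000 */
        printf("way %d: %s\n", w,
               way_may_contain(w, 0x1000) ? "probe" : "skip (energy saved)");
    return 0;
}
```

In this sketch, only the way that actually received the block is probed; the other ways report a definite miss and are skipped. The asymmetric handling of saturated counters is the key correctness point: decrementing a counter whose true value is unknown could make the filter report a false "definitely absent" and turn a cache hit into a miss, whereas leaving it saturated merely costs an occasional unnecessary probe.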