dc.description.abstract | Cache memory is a bridging component that covers the increasing speed gap between the processor and main memory. Good cache performance is crucial to overall system performance. Conflict misses are one of the critical factors limiting cache performance: many blocks map to the same set and evict one another, while other sets remain sparsely used, so the available cache space is not efficiently utilized. A direct way to reduce conflict misses is to increase associativity, but this comes at the cost of a higher hit time. Another way is to change the cache-indexing scheme so that accesses are distributed more evenly across all sets.
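The contrast between the two indexing schemes can be illustrated with a minimal Python sketch. The cache geometry (8 sets, 12-bit block addresses) and the binary matrix below are toy values chosen for illustration, not the configuration evaluated in the thesis; a matrix-based scheme computes each index bit as a GF(2) inner product (an XOR fold) of selected address bits.

```python
NUM_SETS = 8       # toy cache: 8 sets -> 3 index bits (illustrative only)

def modulus_index(block_addr: int) -> int:
    """Traditional indexing: low-order bits of the block address."""
    return block_addr % NUM_SETS

# Hypothetical 3x12 binary matrix; row i selects the address bits that
# are XOR-folded into index bit i (matrix-vector product over GF(2)).
MATRIX = [
    0b100100100100,
    0b010010010010,
    0b001001001001,
]

def matrix_index(block_addr: int) -> int:
    """Matrix-based indexing: index bit i = parity of (row_i AND addr)."""
    index = 0
    for i, row in enumerate(MATRIX):
        bit = bin(row & block_addr).count("1") & 1   # parity = GF(2) sum
        index |= bit << i
    return index

# A strided access pattern that collides under modulus indexing:
addrs = [s * NUM_SETS for s in range(4)]        # 0, 8, 16, 24
print([modulus_index(a) for a in addrs])        # all land in set 0
print([matrix_index(a) for a in addrs])         # spread over several sets
```

For the strided addresses above, modulus indexing maps every access to the same set, while the matrix scheme scatters them, which is the mechanism by which such schemes reduce conflict misses.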
This thesis focuses on the second approach and evaluates the impact of a matrix-based indexing scheme on cache performance against the traditional modulus-based indexing scheme. The correlation between the proposed indexing scheme and different cache replacement policies is also studied.
The matrix-based indexing scheme yields a geometric mean speedup of 1.2% on the SPEC CPU2017 benchmarks in single-core simulations when applied to a direct-mapped last-level cache. In this case, improvements of at least 1.5% and 4% are observed for eighteen and seven SPEC CPU2017 applications, respectively. It also yields a 2% performance improvement over sixteen SPEC CPU2006 benchmarks. The new indexing scheme correlates well with multiperspective reuse prediction, and LRU improves the performance of the machine-learning benchmark by 5.1%. In multicore simulations, the new indexing scheme does not improve performance significantly; however, it also does not impact application performance negatively. | en |