dc.contributor.advisor | Jiménez, Daniel A | |
dc.creator | Coman, James | |
dc.date.accessioned | 2022-01-24T22:19:57Z | |
dc.date.available | 2022-01-24T22:19:57Z | |
dc.date.created | 2021-08 | |
dc.date.issued | 2021-07-26 | |
dc.date.submitted | August 2021 | |
dc.identifier.uri | https://hdl.handle.net/1969.1/195140 | |
dc.description.abstract | As the memory footprints of modern compute workloads continue to grow[1], pressure on the
memory hierarchy increases and address translation plays an increasingly important role in system
performance. Translation Lookaside Buffers (TLBs) are vital structures for the performance of
modern virtual memory systems. They reduce the need for slow and expensive page walks by
caching the most recent virtual-to-physical address translations. We analyze how well the cost
of the page walk can be approximated in a five-level memory hierarchy, and how simple and
hypothetical optimizations affect memory system performance.
Initially, we compare the performance of a realistic page walker to a fixed page walk penalty.
This allows future work to assume a demonstrably reasonable constant value in experimentation,
rather than relying on intuition, saving the additional time and energy of a simulated page walk.
A suggested fixed value is put forward, along with an analysis of its variability across workloads
and its limitations.
Making use of this fixed page walk penalty, we also look at the effect of a simple TLB optimization:
doubling the available resources. This allows us to assess the effect of the TLB on
memory system performance and to discuss both what a future optimization might look like and
what performance can reasonably be expected and hoped for.
We analyze one potential in-TLB optimization, CHiRP[2], which seeks a replacement policy
for the TLB that is more appropriate and better optimized for the structure than least-recently-used (LRU).
We analyze the structure of the policy and compare the results of the CHiRP work against our
hypothetical performance improvements. A prefetching-related strategy, ASAP[3], which
prefetches within, and relevant only to, a particular page walk, is also examined. | en |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | |
dc.subject | TLB | en |
dc.subject | cache | en |
dc.subject | cache management | en |
dc.title | Simulation of Address Translation Techniques | en |
dc.type | Thesis | en |
thesis.degree.department | Computer Science and Engineering | en |
thesis.degree.discipline | Computer Engineering | en |
thesis.degree.grantor | Texas A&M University | en |
thesis.degree.name | Master of Science | en |
thesis.degree.level | Masters | en |
dc.contributor.committeeMember | Gratz, Paul | |
dc.contributor.committeeMember | da Silva, Dilma | |
dc.type.material | text | en |
dc.date.updated | 2022-01-24T22:19:57Z | |
local.etdauthor.orcid | 0000-0002-3744-1710 | |