VESPA: VIPT Enhancements for Superpage Accesses

01/12/2017
by Mayank Parasar, et al.

L1 caches are critical to the performance of modern computer systems. Their design involves a delicate balance between fast lookups, high hit rates, low access energy, and simplicity of implementation. Unfortunately, constraints imposed by virtual memory make it difficult to satisfy all of these attributes today. Specifically, the modern staple of virtual-indexing and physical-tagging (VIPT) for parallel TLB-L1 lookups means that L1 caches are usually grown with greater associativity rather than more sets. This compromises performance -- degrading access times without significantly boosting hit rates -- and increases access energy. In response, we propose VESPA: VIPT Enhancements for SuperPage Accesses. VESPA side-steps the traditional problems of VIPT by leveraging the increasing ubiquity of superpages (also called huge or large pages -- any architecturally supported page size bigger than the baseline page size). Since superpages have more page offset bits, they can accommodate L1 cache organizations with more sets than baseline pages can. VESPA dynamically adapts to any OS distribution of page sizes to operate L1 caches with good access times, hit rates, and energy, for both single- and multi-threaded workloads. Since the hardware changes are modest, and there are no OS or application changes, VESPA is readily implementable.
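The VIPT set-count limit described above follows from simple address arithmetic: the cache index bits must fall within the page offset so indexing can begin before address translation completes. The sketch below (hypothetical helper names, not from the paper) shows how the page size caps the number of sets, and how a superpage's larger offset relaxes that cap.

```python
def max_vipt_sets(page_bytes, line_bytes):
    """Maximum number of sets a VIPT cache can have when the
    index bits must fit entirely within the page offset."""
    page_offset_bits = (page_bytes - 1).bit_length()
    line_offset_bits = (line_bytes - 1).bit_length()
    return 1 << (page_offset_bits - line_offset_bits)

def min_ways(cache_bytes, line_bytes, sets):
    """Associativity forced on a cache of a given capacity
    once the set count is capped."""
    return cache_bytes // (sets * line_bytes)

# 4 KiB base pages, 64 B lines: only 12 - 6 = 6 index bits are
# available, i.e. 64 sets, so a 32 KiB L1 must be 8-way associative.
sets_4k = max_vipt_sets(4096, 64)            # 64 sets
ways_4k = min_ways(32 * 1024, 64, sets_4k)   # 8 ways

# 2 MiB superpages expose 21 offset bits, so accesses to superpage
# data could index a far larger set count -- the property VESPA exploits.
sets_2m = max_vipt_sets(2 * 1024 * 1024, 64)  # 32768 sets
```

This is why a base-page VIPT L1 grows in associativity rather than sets: capacity can only increase by adding ways once the set count hits the page-offset ceiling.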


