Fast Bitmap Fit: A CPU Cache Line friendly memory allocator for single object allocations

10/20/2021
by Dhruv Matani, et al.

Applications that rely heavily on single-object data structures (such as linked lists and trees) can see their efficiency degrade over time as nodes become scattered randomly across memory. This slowdown stems from ineffective use of the CPU's L1/L2 caches. We present a novel approach to mitigating this problem: the design of a single-object memory allocator that preserves memory locality even across randomly ordered allocations and deallocations.
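To illustrate the general idea (this is a hypothetical sketch, not the paper's actual implementation), a single-object allocator can track free slots in a fixed pool with a bitmap and always hand out the lowest free slot. First-fit on the bitmap keeps live objects packed at low addresses, so nodes allocated at different times still tend to share cache lines. All names below (`BitmapPool`, `allocate`, `deallocate`) are assumptions for illustration:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sketch of a bitmap-fit single-object pool.
// One 64-bit word tracks which of up to 64 fixed-size slots are free;
// allocation picks the lowest free slot so live objects stay packed
// together, improving L1/L2 cache-line utilization.
template <typename T, std::size_t N = 64>
class BitmapPool {
    static_assert(N >= 1 && N <= 64, "this sketch uses one 64-bit bitmap word");

    alignas(64) unsigned char storage_[N * sizeof(T)];  // cache-line aligned backing store
    std::uint64_t free_mask_ = (N == 64) ? ~0ull : ((1ull << N) - 1);  // 1 bit = free slot

public:
    T* allocate() {
        if (free_mask_ == 0) return nullptr;        // pool exhausted
        int slot = __builtin_ctzll(free_mask_);     // index of lowest free slot
        free_mask_ &= free_mask_ - 1;               // clear that bit (mark in use)
        return reinterpret_cast<T*>(storage_ + slot * sizeof(T));
    }

    void deallocate(T* p) {
        std::size_t slot =
            (reinterpret_cast<unsigned char*>(p) - storage_) / sizeof(T);
        free_mask_ |= 1ull << slot;                 // mark slot free again
    }
};
```

Because the lowest free slot is always reused first, a freed slot between two live objects is refilled on the next allocation, so the working set stays contiguous rather than drifting apart as allocations and deallocations interleave.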


