Cache Bypassing for Machine Learning Algorithms

02/13/2021
by Asim Ikram, et al.

Graphics Processing Units (GPUs) were once used solely for graphics workloads, but with the growth of machine learning applications, their use for general-purpose computing has increased in recent years. GPUs employ a massive number of threads to achieve a high degree of parallelism. Although GPUs offer substantial computational power, they suffer from cache contention due to the SIMT execution model they use. One solution to this problem is called "cache bypassing". This paper presents a predictive model that analyzes the memory-access patterns of various machine learning algorithms and determines whether given data should be stored in the cache or not. It presents insights on how well each model performs on different datasets and also shows how minimizing the size of each model affects its performance. The accuracy of most of the models was found to be around 90%, but not at the smallest model size. We further increase the number of features by splitting the addresses into chunks of 4 bytes. We observe that this substantially improved the performance of the neural network and increased its accuracy to 99.9%.
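The abstract does not spell out the feature engineering or model details, so the sketch below is only a rough illustration of the address-chunking idea: each 64-bit address is split into 4-byte chunks, and a small neural network is trained to predict a cache/bypass label. Everything here is an assumption made for illustration — the synthetic trace, the toy labeling rule, the 64-bit address width, and the use of scikit-learn's MLPClassifier as a stand-in for the paper's actual model.

```python
# Minimal sketch of address-chunk features for cache-bypass prediction.
# The trace, labels, and model here are synthetic stand-ins, not the
# paper's datasets or architecture.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def split_address(addr: int, chunk_bytes: int = 4, addr_bytes: int = 8):
    """Split an address into fixed-size chunks (here: a 64-bit address
    into two 4-byte chunks), turning one wide feature into several
    narrower ones."""
    chunk_bits = chunk_bytes * 8
    n_chunks = addr_bytes // chunk_bytes
    mask = (1 << chunk_bits) - 1
    return [(addr >> (chunk_bits * i)) & mask for i in range(n_chunks)]

# Synthetic stand-in for a memory-access trace.
rng = np.random.default_rng(0)
addresses = rng.integers(0, 2**63, size=10_000, dtype=np.uint64)

# Toy labeling rule (assumed): bypass when the high 4-byte chunk is large.
# A real trace would label accesses from observed cache behavior instead.
labels = ((addresses >> np.uint64(32)) > np.uint64(2**30)).astype(int)

# Build the chunked feature matrix and scale chunks into [0, 1).
X = np.array([split_address(int(a)) for a in addresses], dtype=np.float64)
X /= float(2**32)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0)

# Small neural network as a stand-in for the paper's predictive model.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

One plausible reason such chunking helps, consistent with the abstract's result: instead of a single input spanning the full address space, the network sees several moderate-range inputs, which makes address-pattern structure easier to learn.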
