Fast-track cache: a huge racetrack memory L1 data cache

International Conference on Supercomputing (ICS), 2022

Abstract
First-level (L1) caches have traditionally been implemented with Static Random-Access Memory (SRAM), since it is the fastest memory technology and L1 caches call for tight timing constraints in the processor pipeline. However, one of the main downsides of SRAM is its low density, which prevents L1 caches from growing beyond a few tens of KB. On the other hand, the recent Domain Wall Memory (DWM) technology overcomes this constraint by arranging multiple bits in a magnetic racetrack and sharing a header to access those bits. Accessing a bit requires a shift operation to align the target bit under the header. Such shifts increase the final access latency, which is the main reason why DWM has mostly been used to implement slow last-level caches.
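The shift-based access mechanism described above can be illustrated with a minimal sketch. This is not the paper's model: the racetrack length, per-shift cost, and base access cost below are illustrative assumptions, chosen only to show how access latency grows with the distance between the shared header and the target bit.

```python
# Hypothetical sketch of DWM racetrack access latency (not the paper's model).
# All parameter values are illustrative assumptions.

RACETRACK_LEN = 64   # bits stored per racetrack (assumed)
SHIFT_CYCLES = 1     # cycles per one-bit shift (assumed)
BASE_ACCESS = 1      # cycles for the read/write itself (assumed)

def access_latency(head_pos: int, target_bit: int) -> tuple[int, int]:
    """Return (latency in cycles, new header position) for one access.

    The racetrack must shift |target_bit - head_pos| positions so the
    target bit sits under the shared header before it can be read.
    """
    shifts = abs(target_bit - head_pos)
    return BASE_ACCESS + shifts * SHIFT_CYCLES, target_bit

# A bit already under the header needs no shifts, while a distant bit
# pays the full shift distance on top of the base access cost.
lat_near, head = access_latency(0, 0)     # no shifts
lat_far, head = access_latency(head, 63)  # worst case on this track
```

Under these assumptions, the near access costs 1 cycle while the far access costs 64, which conveys why unmanaged shifts make DWM unattractive for latency-critical L1 caches.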