Enabling Novel In-Memory Computation Algorithms to Address Next-Generation Throughput Constraints on SWaP-Limited Platforms

2022 IEEE High Performance Extreme Computing Conference (HPEC), 2022

Abstract
The Department of Defense relies heavily on filtering and selection applications to help manage the overwhelming amount of data constantly received at the tactical edge. Filtering and selection are both latency and throughput constrained, and systems at the tactical edge must heavily optimize their SWaP (size, weight, and power) usage, which can reduce overall computation and memory performance. In-memory computation (IMC) provides a promising solution to the latency and throughput issues, as it helps enable the efficient processing of data as it is received, helping eliminate the memory bottleneck imposed by traditional von Neumann architectures. In this paper, we discuss a specific type of IMC accelerator known as a Content Addressable Memory (CAM), which effectively operates as a hardware-based associative array, allowing fast lookup and match operations. In particular, we consider ternary CAMs (TCAMs) and their use within string matching, which is an important component of many filtering and selection applications. Despite the benefits gained with TCAMs, designing applications that utilize them remains a difficult task. Straightforward questions, such as "how large should my TCAM be?" and "what is the expected throughput?" are difficult to answer due to the many factors that go into effectively mapping data into a TCAM. This work aims to help answer these types of questions with a new framework called Stardust-Chicken. Stardust-Chicken supports generating and simulating TCAMs, and implements state-of-the-art algorithms and data representations that can effectively map data into TCAMs. With Stardust-Chicken, users can explore the tradeoff space that comes with TCAMs and better understand how to utilize them in their applications.
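The abstract describes a TCAM as a hardware associative array whose entries may contain don't-care positions, so a single lookup compares a key against every stored pattern at once. The sketch below is a minimal software model of that ternary-match behavior, not code from the paper or from Stardust-Chicken; the class name, the '*' wildcard convention, and the example rules are illustrative assumptions.

```python
# Minimal software model of ternary (don't-care) matching as described above.
# Illustrative only: names and the '*' wildcard convention are assumptions,
# not taken from the Stardust-Chicken framework.

class SoftTCAM:
    """Stores fixed-width patterns where '*' marks a don't-care position."""

    def __init__(self, width):
        self.width = width
        self.entries = []  # list of (pattern, tag) pairs

    def write(self, pattern, tag):
        assert len(pattern) == self.width
        self.entries.append((pattern, tag))

    def search(self, key):
        """Return tags of all entries whose non-wildcard positions match key.

        A real TCAM compares every entry in parallel in a single cycle;
        this loop models only the match result, not the hardware latency.
        """
        assert len(key) == self.width
        hits = []
        for pattern, tag in self.entries:
            if all(p == '*' or p == k for p, k in zip(pattern, key)):
                hits.append(tag)
        return hits


if __name__ == "__main__":
    tcam = SoftTCAM(width=4)
    tcam.write("ab*d", "rule-0")   # matches "abcd", "abxd", ...
    tcam.write("****", "rule-1")   # catch-all entry
    print(tcam.search("abcd"))     # ['rule-0', 'rule-1']
```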
Keywords
in-memory computation,content addressable memory,string matching