Scalable, high-speed on-chip-based NDN name forwarding using FPGA.

ICDCN '19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING AND NETWORKING (2019)

Abstract
Named Data Networking (NDN) is the most promising candidate among the proposed content-based future Internet architectures. In NDN, the Forwarding Information Base (FIB) maintains name prefixes and their corresponding outgoing interface(s), and forwards incoming packets by computing the longest prefix match (LPM) of their content names (CNs). A CN in NDN is variable-length and hierarchically structured, so performing name lookup for packet forwarding at wire speed is a challenging task. GPUs can achieve much higher lookup speeds than CPUs, but they are often limited by CPU-GPU transfer latencies. In this paper, we exploit the massive parallel processing power of FPGA technology and propose a scalable, high-speed on-chip SRAM-based NDN name forwarding scheme for the FIB (OnChip-FIB) using Field-Programmable Gate Arrays (FPGAs). OnChip-FIB scales well as the number of prefixes grows, owing to its low storage complexity and low resource utilization. Extensive simulation results show that the OnChip-FIB scheme can achieve 1.06 μs measured lookup latency with 26% on-chip block memory usage in a single Xilinx UltraScale FPGA for a 50K-name dataset.
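The component-wise longest prefix match the abstract describes can be illustrated with a minimal sketch. This is a plain hash-table simplification for clarity, not the paper's on-chip SRAM design; the `Fib` class and its method names are hypothetical.

```python
# Hedged sketch: component-wise LPM over hierarchical NDN names.
# The table layout (dict of prefix tuples) is an illustrative
# simplification, not the OnChip-FIB hardware scheme.

def split_name(name):
    """Split an NDN content name like /a/b/c into its components."""
    return [c for c in name.split("/") if c]

class Fib:
    def __init__(self):
        # Maps a name-prefix tuple to its outgoing interface list.
        self.table = {}

    def insert(self, prefix, faces):
        self.table[tuple(split_name(prefix))] = faces

    def lpm(self, content_name):
        """Return faces for the longest matching prefix, or None."""
        comps = split_name(content_name)
        # Probe progressively shorter prefixes until one matches.
        for n in range(len(comps), 0, -1):
            faces = self.table.get(tuple(comps[:n]))
            if faces is not None:
                return faces
        return None

fib = Fib()
fib.insert("/edu/ucla", [1])
fib.insert("/edu/ucla/cs", [2, 3])
print(fib.lpm("/edu/ucla/cs/video/intro.mp4"))  # -> [2, 3]
print(fib.lpm("/edu/mit/ocw"))                  # -> None
```

Because NDN names are variable-length, a software LPM must probe multiple prefix lengths per packet; the paper's contribution is doing this at wire speed by exploiting FPGA parallelism and on-chip SRAM instead.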
Keywords
Named Data Networking,NDN,FIB,Forwarding,Name lookup