Simultaneously Reducing Latency and Power Consumption in OpenFlow Switches

IEEE/ACM Transactions on Networking (2014)

Cited by 90 | Viewed 26
Abstract
The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexibility and programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, put a bound on switch latency. In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing rich policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality in network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved from bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable using only a small amount of prediction circuitry per port. These studies also show that prediction circuitry can help reduce the power consumed by a lookup process that includes a TCAM by 92% and simultaneously reduce the latency of a cut-through switch by 66%.
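To make the idea concrete, below is a minimal Python sketch of per-port packet prediction as described in the abstract: a small per-port table predicts the flow classification of an arriving packet from the temporal locality of traffic, and the full (power-hungry) TCAM lookup is consulted only when the prediction misses. The direct-mapped table, its size, the chosen flow-key fields, and the learn-on-miss update policy are illustrative assumptions, not the authors' exact circuit design.

```python
# Illustrative sketch of per-port packet prediction (assumptions: a small
# direct-mapped table keyed by a hash of flow fields; the real hardware
# design, table size, and key fields may differ from the paper).
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowKey:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: int


class PortPredictor:
    """Per-port prediction table: guesses a flow's action without a full TCAM lookup."""

    def __init__(self, entries: int = 32):
        self.entries = entries
        self.table = [None] * entries      # each slot: (FlowKey, action) or None
        self.hits = 0
        self.lookups = 0

    def _index(self, key: FlowKey) -> int:
        return hash(key) % self.entries

    def classify(self, key: FlowKey, tcam_lookup) -> str:
        """Return the forwarding action, consulting the full lookup only on a miss."""
        self.lookups += 1
        slot = self._index(key)
        cached = self.table[slot]
        if cached is not None and cached[0] == key:
            self.hits += 1                 # prediction correct: the TCAM stays idle
            return cached[1]
        action = tcam_lookup(key)          # misprediction: fall back to the full lookup
        self.table[slot] = (key, action)   # learn the flow for subsequent packets
        return action


# Toy usage: packets of the same flow arrive back-to-back, so after the
# first packet nearly every lookup is served by the prediction table.
def fake_tcam(key: FlowKey) -> str:
    return f"forward_to_port_{key.dst_port % 4}"


pred = PortPredictor()
flow = FlowKey("10.0.0.1", "10.0.0.2", 5000, 80, 6)
for _ in range(100):
    pred.classify(flow, fake_tcam)
print(f"correct prediction rate: {pred.hits / pred.lookups:.0%}")
```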
Keywords
content-addressable storage, local area networks, packet switching, table lookup, telecommunication power management, OpenFlow Ethernet switch, OpenFlow architecture, TCAM, data centers, enterprise networks, latency reduction, lookup process, packet processing features, per-port packet prediction circuitry, power consumption, power saving, primary building block, programmable packet processing, ternary content addressable memories, Ethernet networks, software defined networking