Authors: Paul T. Congdon, Prasant Mohapatra, Matthew Farrens, Venkatesh Akella
DOI: 10.1109/TNET.2013.2270436
Keywords:
Abstract: The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexible, programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, put a bound on switch latency. In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing the policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality of network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved by bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable with only a small amount of prediction circuitry per port. These studies also show that prediction can help reduce the power consumed by a lookup process that includes a TCAM by 92% and reduce the latency of a cut-through switch by 66%.
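The mechanism the abstract describes can be illustrated with a minimal software sketch: a small direct-mapped prediction table per port caches the most recent flow classification, and a hit lets the switch skip the expensive TCAM lookup. This is an assumption-laden toy model (the class `PortPredictor`, the table size, and the stand-in `tcam_lookup` callback are all hypothetical, not taken from the paper), intended only to show why temporal locality in traffic yields high prediction rates.

```python
class PortPredictor:
    """Hypothetical per-port prediction table (direct-mapped cache).

    Each slot caches the last (flow key, action) pair seen at that index;
    a hit bypasses the full TCAM lookup, saving power and latency.
    """

    def __init__(self, entries=16):
        self.entries = entries
        self.table = [None] * entries  # slots hold (flow_key, action) or None

    def _index(self, flow_key):
        # Map the flow key into the small table (simple hash indexing)
        return hash(flow_key) % self.entries

    def lookup(self, flow_key, tcam_lookup):
        """Return (action, hit). On a miss, fall back to tcam_lookup and cache."""
        i = self._index(flow_key)
        entry = self.table[i]
        if entry is not None and entry[0] == flow_key:
            return entry[1], True           # prediction hit: TCAM bypassed
        action = tcam_lookup(flow_key)      # miss: full lookup, then cache result
        self.table[i] = (flow_key, action)
        return action, False

# Toy trace with temporal locality: packets from a few flows arrive in bursts
trace = ["flowA"] * 5 + ["flowB"] * 5 + ["flowA"] * 5
pred = PortPredictor()
hits = 0
for pkt in trace:
    _, hit = pred.lookup(pkt, tcam_lookup=lambda k: f"forward({k})")
    hits += hit
print(f"prediction hits: {hits} of {len(trace)} packets")
```

Only the first packet of each burst misses (plus a possible extra miss if two flows collide in the small table), so the hit rate climbs with burst length, which is the intuition behind the high correct-prediction rates reported in the paper.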