We share the Intel Network Builders program's vision of transforming the network through innovative ecosystem solutions. As members, we support the new Intel Network Builders Fast Track initiative by optimizing solutions based on Intel Architecture.
We are excited to announce that at SDN & OpenFlow World Congress, HP, in partnership with Intel, will demonstrate the next milestone in software-defined networking (SDN) and network functions virtualization (NFV): a software-based OpenFlow switch cluster that supports a total of 1 Tbps of throughput with 1 billion OpenFlow rules.
These are big numbers, but why should Communication Service Providers (CSPs) really care?
First, this proves that SDN/NFV can scale to carrier needs.
We’ve heard doubts from many CSPs time and again: “Sure, NFV and SDN are cool technologies and we understand the benefits they bring. But what about performance? Can they be used where we need both scale AND performance?” This is why it was important for us to demonstrate that SDN/NFV can be used in demanding, high-performance use cases. Past NFV environments focused on functionality, leaving a big question mark over whether NFV and SDN can actually meet scale and performance requirements. In this case we show a full setup with six servers. Such a large number of flows isn’t supported on hardware switches, and even large routers have a hard time supporting one billion flows. Here we are showing that SDN can support this using standard off-the-shelf servers.
Second, it enhances agility and optimizes resource usage – two things that CSPs are looking to SDN and NFV to deliver.
Agility and maximizing resource utilization are the primary reasons CSPs feel the need to shift from proprietary, hardware-based network functions to a software-centric world. Today, operators deploy network functions at distributed Points of Presence (PoPs) across their footprint. A significant portion of these elements are involved in transporting information from one point to another, and they deal with varying traffic patterns (e.g. e-commerce traffic in the United States may peak on Cyber Monday and voice traffic on Mother’s Day). These hardware functions are sized and deployed for peak capacity and are only used to full capacity on a few occasions.
If capacity is needed in a given location, unused capacity in other parts of the network cannot be utilized, because the functional entity/license (largely implemented in software) is tied to a remote hardware element. Moreover, if the service provider wants to add any value to the traffic path, a function has to be inserted into that path. Introducing a function selectively is not simple, as the traffic forwarding infrastructure is designed for shortest-path destination forwarding.
SDN and NFV allow functions to be deployed in smaller sizes, with instances that can be added and removed based on traffic patterns. However, to handle such a large aggregated traffic stream, the system needs to be able to look at the component streams that make up the aggregate and forward them to and from the functions in the network service path. This is why operators require a switching solution that can be programmed to deal with a lot of traffic at a granular level.
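To make the idea of granular forwarding concrete, here is a toy C sketch of a 5-tuple flow table. All names and sizes are our own illustrations, not ContexSwitch internals: each entry represents one component stream of the aggregate and carries the output port (or service function) that stream should be steered to, and a lookup miss is where an OpenFlow switch would punt to its controller.

/* Illustrative only: a toy 5-tuple flow table with hypothetical names.
 * It shows the kind of per-flow granularity described above, where each
 * subscriber flow inside an aggregate can be steered independently. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE (1u << 20)   /* toy size; the demo scales far beyond this */

struct flow_key {               /* one "component stream" of the aggregate */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;             /* real code would zero/pack the key so
                                 * padding never reaches the hash */
};

struct flow_entry {
    struct flow_key key;
    uint16_t out_port;          /* next hop / service function for this flow */
    uint8_t  in_use;
};

static struct flow_entry table[TABLE_SIZE];

static uint32_t hash_key(const struct flow_key *k)
{
    /* FNV-1a over the key bytes; production switches use stronger hashes. */
    const uint8_t *p = (const uint8_t *)k;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < sizeof(*k); i++)
        h = (h ^ p[i]) * 16777619u;
    return h & (TABLE_SIZE - 1);
}

static void add_rule(const struct flow_key *k, uint16_t out_port)
{
    struct flow_entry *e = &table[hash_key(k)];  /* no collision handling */
    e->key = *k;
    e->out_port = out_port;
    e->in_use = 1;
}

static int lookup(const struct flow_key *k, uint16_t *out_port)
{
    struct flow_entry *e = &table[hash_key(k)];
    if (e->in_use && memcmp(&e->key, k, sizeof(*k)) == 0) {
        *out_port = e->out_port;
        return 1;
    }
    return 0;                   /* miss: punt to the controller in OpenFlow terms */
}

int main(void)
{
    struct flow_key k = { .src_ip = 0x0a000001, .dst_ip = 0x0a000002,
                          .src_port = 40000, .dst_port = 80, .proto = 6 };
    uint16_t port;

    add_rule(&k, 3);            /* steer this subscriber flow to port 3 */
    if (lookup(&k, &port))
        printf("forward to port %u\n", port);
    return 0;
}

A production switch would of course need collision handling, wildcard and priority matching, and a stronger hash, but the key point stands: a table like this lives in ordinary server DRAM and grows with it, rather than with specialized lookup silicon.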
Today many hardware switches are limited by their Content Addressable Memory (CAM) table sizes and cannot approach the granularity required in a CSP environment. With this demonstration, HP and Intel are showing how this can be made possible in software on standard Intel-based hardware platforms. With a throughput of 1 Tbps and 1 billion OpenFlow rules, the system can be put to use in the most challenging carrier environments, in both broadband and mobile networks. We are moving SDN and NFV to scales that were previously available only with proprietary systems.
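As a rough back-of-the-envelope estimate of our own (the per-entry figure is an assumption, not a measurement from the demo): at around 64 bytes of state per flow entry, 1 billion rules amounts to roughly 64 GB, or about 11 GB per server across a six-server cluster, which fits comfortably in commodity DRAM. The TCAM-backed flow tables of typical hardware OpenFlow switches, by contrast, hold on the order of tens of thousands to a few hundred thousand entries.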
The setup uses OpenStack and ContexSwitch, HP ConteXtream’s software-based OpenFlow switching component.
ContexSwitch uses the Data Plane Development Kit (DPDK) to achieve high performance and support a large number of OpenFlow rules. The demonstration involves 24 x 40G ports distributed across 6 Intel servers with Intel® Xeon® processors E5 v3. Each server has 2 sockets, each with 18 cores; with hyper-threading enabled, that gives a total of 72 hardware threads per server. OpenStack’s core pinning is used to dedicate cores to the software switch. The setup interconnects with the test generation environment through two HP 5930 1U switches.
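For readers unfamiliar with DPDK, the sketch below shows the poll-mode, run-to-completion pattern that underpins this kind of throughput: a worker thread pinned to a dedicated core pulls packets from the NIC in bursts, with no interrupts or kernel networking stack in the path. It is a minimal, illustrative example loosely modeled on DPDK’s basic forwarding samples, not ContexSwitch code; queue sizes and core numbers are arbitrary.

/* Minimal DPDK poll-mode forwarding sketch (illustrative only).
 * Worker cores are chosen with EAL's -l option, the same kind of core
 * dedication the demo achieves through OpenStack CPU pinning. */
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_DESC 1024
#define TX_DESC 1024
#define BURST   32

static int port_init(uint16_t port, struct rte_mempool *pool)
{
    struct rte_eth_conf conf = {0};          /* default port configuration */

    if (rte_eth_dev_configure(port, 1, 1, &conf) < 0)
        return -1;
    if (rte_eth_rx_queue_setup(port, 0, RX_DESC,
                               rte_eth_dev_socket_id(port), NULL, pool) < 0)
        return -1;
    if (rte_eth_tx_queue_setup(port, 0, TX_DESC,
                               rte_eth_dev_socket_id(port), NULL) < 0)
        return -1;
    return rte_eth_dev_start(port);
}

/* Run-to-completion loop: poll, (classify), transmit. A real OpenFlow
 * switch would look each packet up in its flow tables between rx and tx. */
static int fwd_loop(void *arg)
{
    uint16_t port = *(uint16_t *)arg;
    struct rte_mbuf *bufs[BURST];

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST);
        if (nb_rx == 0)
            continue;                        /* busy-poll; no interrupts */
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);       /* drop what TX couldn't take */
    }
    return 0;
}

int main(int argc, char **argv)
{
    static uint16_t port = 0;

    /* EAL arguments (e.g. "-l 2-19") bind lcores to physical cores. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "mbuf_pool", 8192 * 4 - 1, 256, 0,
        RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL || port_init(port, pool) < 0)
        rte_exit(EXIT_FAILURE, "port init failed\n");

    /* Launch the forwarding loop on worker lcore 1 (must be in the EAL
     * core list); one loop per core and per queue is how this pattern
     * scales across many-core sockets. */
    rte_eal_remote_launch(fwd_loop, &port, 1);
    rte_eal_mp_wait_lcore();
    return 0;
}

In a setup like the one described, OpenStack’s CPU pinning plays the role of the EAL core list here: each core dedicated to the switch runs one such loop against its own NIC queue, so throughput grows roughly with the number of pinned cores.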