Newer edge computing use cases such as the Internet of Things (IoT), virtualized radio access network (vRAN), multi-access edge computing (MEC) and secure access service edge (SASE) demand low latency, which is pushing service providers and data center operators to deploy compute resources at the network edge at an increasing rate. But power consumption and hardware footprint remain hindering factors wherever those resources land: purpose-built micro data centers, traditional telco points of presence (PoPs) or converged cable access platforms (CCAPs). And because so many edge data centers are being deployed, operators are under a mandate to keep each one cost-effective.


Programmable smart network interface cards (SmartNICs) address the power, footprint and cost challenges of edge data centers by letting operators offload virtual switching from server CPUs onto the NIC itself.


The vSwitch status quo

The traditional data center configuration uses virtual switches (vSwitches) to connect virtual machines (VMs) to both virtual and physical networks. In many use cases, a vSwitch is also required to switch “East-West” traffic between the VMs themselves, supporting applications such as advanced security, video processing or content delivery networks (CDNs). Various vSwitch implementations are available, with Open vSwitch accelerated by the Data Plane Development Kit, known as OVS-DPDK, probably the most widely used thanks to the optimized performance it achieves with the DPDK software libraries and its availability as a standard open-source project.


Here is the difficulty from the solution architect’s perspective: because the vSwitch is a software-based function, it must run on the same server central processing units (CPUs) as the VMs. But only the VMs running applications and services ultimately generate revenue for the operator: no one gets paid for just switching network traffic. So, there’s a strong business incentive to minimize the number of cores consumed by the vSwitch to maximize the number of cores available for VMs.


For low-bandwidth use cases, this is not an issue. As an example, the most recent versions of OVS-DPDK can switch approximately 8 million packets per second (Mpps) of bidirectional network traffic per core, assuming 64-byte packets. So, if each VM core only requires 1 Mpps, then a single vSwitch core can switch enough traffic to support eight VM cores, and the switching overhead isn’t too bad. On a 16-core CPU, two vSwitch cores can support 13 VM cores (assuming one core is dedicated to management functions).
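

This core arithmetic is easy to sanity-check. Below is a minimal Python sketch of the core budgeting, treating the roughly 8 Mpps per vSwitch core and the per-VM-core traffic figures as illustrative assumptions from the example above (the function name and structure are purely hypothetical, not part of any OVS or DPDK API):

```python
import math

def best_split(total_cores, vm_mpps, vswitch_mpps_per_core=8, mgmt_cores=1):
    """Find the largest number of VM cores a CPU can host.

    For each candidate VM core count, work out how many vSwitch cores are
    needed to carry its traffic and keep the largest count that still fits
    alongside the management core(s). Throughput figures are the article's
    illustrative assumptions, not benchmarks.
    """
    for vm_cores in range(total_cores - mgmt_cores, 0, -1):
        vswitch_cores = math.ceil(vm_cores * vm_mpps / vswitch_mpps_per_core)
        if mgmt_cores + vswitch_cores + vm_cores <= total_cores:
            stranded = total_cores - mgmt_cores - vswitch_cores - vm_cores
            return vm_cores, vswitch_cores, stranded
    return 0, 0, total_cores - mgmt_cores

# Low-bandwidth case from the text: 1 Mpps per VM core on a 16-core CPU.
print(best_split(16, vm_mpps=1))   # -> (13, 2, 0): 13 VM cores, 2 vSwitch cores
```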


SmartNICs to the rescue

Yet imagine that each VM core requires 10 Mpps. In that case, more than one vSwitch core is needed to support each VM core, and more than half the CPU is consumed by switching. In the 16-core scenario, eight cores must be configured to run the vSwitch while only six (consuming 60 Mpps) are left to run VMs. And the problem only gets worse as the bandwidth requirement for each VM increases.
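

Running the same sketch with the higher per-core figure reproduces this split:

```python
# High-bandwidth case: 10 Mpps per VM core on the same 16-core CPU,
# reusing best_split() from the sketch above.
print(best_split(16, vm_mpps=10))  # -> (6, 8, 1): 6 VM cores, 8 vSwitch cores,
                                   #    one core stranded (plus the management core)
```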


In these scenarios, a data center operator must dedicate many CPU cores to virtual switching. That means more servers are required to support a given number of subscribers or, put another way, the number of subscribers supported by a given data center footprint is unnecessarily constrained. Power is also being consumed by functions that generate no revenue. So, there’s a strong incentive to minimize the number of CPU cores required for virtual switching, especially for high-throughput use cases.


Programmable SmartNICs come to the rescue in this situation by offloading the OVS-DPDK fast path. Only a single CPU core is consumed to support this offload, performing the “slow path” classification of the initial packet in a flow before the remaining packets are processed in the SmartNIC. This frees up the remaining CPU cores to run VMs, significantly improving VM density (the number of VM cores per CPU).
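

Conceptually, the offload behaves like a flow cache: the first packet of each flow is classified in software on a host core, a matching rule is pushed down to the SmartNIC, and every subsequent packet in that flow is switched in hardware without touching the CPU. The toy Python model below illustrates that behavior; the class and method names are invented for illustration and are not an actual OVS or SmartNIC API:

```python
class OffloadedVSwitch:
    """Toy model of OVS-DPDK fast-path offload to a SmartNIC.

    The host CPU only sees the first packet of each flow (the "slow path");
    once a rule is installed in the NIC, later packets of that flow are
    switched entirely in hardware (the "fast path").
    """

    def __init__(self):
        self.nic_flow_table = {}   # flow key -> action, held in SmartNIC hardware
        self.slow_path_hits = 0    # packets handled on a host CPU core
        self.fast_path_hits = 0    # packets handled in the SmartNIC

    def classify(self, packet):
        # Slow path: full classification on the host. A real vSwitch would
        # evaluate its whole flow-table pipeline here; we just derive an
        # action from the destination.
        return "forward_to_port({})".format(packet["dst"])

    def receive(self, packet):
        key = (packet["src"], packet["dst"])
        if key in self.nic_flow_table:
            self.fast_path_hits += 1          # fast path: no host CPU involved
            return self.nic_flow_table[key]
        self.slow_path_hits += 1              # slow path: classify, then offload
        action = self.classify(packet)
        self.nic_flow_table[key] = action     # install the rule in the NIC
        return action


vswitch = OffloadedVSwitch()
packet = {"src": "vm1", "dst": "vm2"}
for _ in range(1000):                 # 1,000 packets of the same flow
    vswitch.receive(packet)
print(vswitch.slow_path_hits, vswitch.fast_path_hits)   # -> 1 999
```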


A real-world SmartNIC example

Let’s take this concept out of the realm of theory with a concrete use case: a 16-core CPU connected to a standard NIC, where eleven vSwitch cores are required to switch the traffic consumed by three VM cores. In this situation, one CPU core remains unused as a stranded resource, because there are not enough vSwitch cores left to switch the traffic for a fourth VM core. This wastage is typical of use cases where the bandwidth required for each VM is high.
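

For what it’s worth, the earlier core-budgeting sketch reproduces this exact split if each VM core is assumed to consume on the order of 28 Mpps, a purely illustrative figure since the article does not state the per-VM load in this example:

```python
# Reusing best_split() from the earlier sketch, with a hypothetical 28 Mpps
# per VM core and the same 8 Mpps-per-vSwitch-core assumption.
print(best_split(16, vm_mpps=28))  # -> (3, 11, 1): 3 VM cores, 11 vSwitch cores,
                                   #    one core stranded
```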


Offloading the vSwitch function to a programmable SmartNIC frees up ten additional cores that are now available to run VMs, so VM density in this case increases from three to ten, an improvement of 3.3x. In both the standard NIC and SmartNIC examples, a single CPU core is reserved for the virtual infrastructure manager (VIM), a typical configuration for virtualized use cases.


Using this method, data center operators can know ahead of time how much of an improvement in VM density they can achieve by offloading the vSwitch to a programmable SmartNIC. It’s then straightforward to estimate the resulting cost savings over whatever timeframe matters to your chief financial officer. Start with the total number of VMs that need to be hosted in the data center, then factor in some basic assumptions about cost and power for the servers and the NICs, OPEX-vs.-CAPEX metrics, data center power usage effectiveness (PUE) and server refresh cycles.
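

As a rough illustration of how such an estimate might be wired together, here is a minimal sketch; every number in it is a made-up placeholder (server price, SmartNIC premium, power draw, electricity cost), not vendor data:

```python
import math

def servers_needed(total_vm_cores, vm_cores_per_cpu, cpus_per_server=2):
    """Servers required to host a given number of VM cores (dual-socket assumed)."""
    return math.ceil(total_vm_cores / (vm_cores_per_cpu * cpus_per_server))

def estimate_savings(total_vm_cores,
                     density_std, density_smart,      # VM cores per CPU
                     server_capex, smartnic_premium,  # per server, illustrative $
                     server_power_kw, pue,
                     power_cost_per_kwh, years):
    """Rough CAPEX-plus-power-OPEX comparison; every input is an assumption."""
    std_servers = servers_needed(total_vm_cores, density_std)
    smart_servers = servers_needed(total_vm_cores, density_smart)
    hours = years * 365 * 24

    def tco(servers, nic_premium):
        capex = servers * (server_capex + nic_premium)
        opex = servers * server_power_kw * pue * hours * power_cost_per_kwh
        return capex + opex

    return tco(std_servers, 0) - tco(smart_servers, smartnic_premium)

# Hypothetical inputs: 1,000 VM cores to host, densities of 3 vs. 10 VM cores
# per CPU from the example above, $10,000 servers, a $1,500 SmartNIC premium,
# 0.5 kW per server, a PUE of 1.5, $0.10/kWh and a three-year horizon.
print(estimate_savings(1000, density_std=3, density_smart=10,
                       server_capex=10_000, smartnic_premium=1_500,
                       server_power_kw=0.5, pue=1.5,
                       power_cost_per_kwh=0.10, years=3))
# roughly $1.3 million in savings with these made-up inputs
```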


Switching to a better approach

Server CPUs represent one of an edge data center’s biggest expenses, and these data centers are growing in number as organizations lay the groundwork for new applications and services. To get the strongest return on that investment, operators need to optimize how their CPUs are used, and offloading the vSwitch to a programmable SmartNIC does exactly that, delivering significant cost savings and efficiency improvements along the way.