While it is clearly early in the game, VMware has made a series of moves recently to ensure that DPUs and the smartNICs they enable become an integral part of future enterprise networking environments.
VMware is a leading proponent of using data processing units (DPUs) to free up server CPU cycles by offloading networking, security, storage, and other processes in order to rapidly and efficiently support edge- and cloud-based workloads.
Competitors—and in some cases partners—including Intel, Nvidia, AWS, and AMD also have plans to more tightly integrate DPU-based devices into firewalls, gateways, enterprise load balancing, and storage-offload applications.
VMware's moves include support for DPUs under the company's flagship vSphere 8 virtualization and vSAN hyperconverged software packages. The idea is that vSphere will be the foundation for deploying, managing, and running workloads effectively and securely regardless of the underlying processor technology, said Tom Gillis, senior vice president and general manager at VMware. In the end, reduced CPU and memory overhead will lead to more efficient workload consolidation and better infrastructure performance, he said.
“When customers use a DPU to offload computing they save 10-to-20% of their server cores, so that’s the economic argument for using DPUs because in a high-density server environment, the higher your density, the more efficient the DPU becomes, but that’s just the beginning,” Gillis said.
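As a rough back-of-the-envelope illustration of the 10-to-20% core savings Gillis describes, consider the sketch below. The fleet size and per-server core count are hypothetical assumptions chosen for illustration, not VMware figures:

```python
# Back-of-the-envelope sketch of the DPU offload economics quoted above.
# Only the 10-20% savings range comes from the article; the server count
# and cores-per-server values are hypothetical assumptions.

def cores_freed(servers: int, cores_per_server: int, offload_fraction: float) -> float:
    """Cores returned to workloads when networking/security moves to a DPU."""
    return servers * cores_per_server * offload_fraction

# Hypothetical high-density environment: 100 servers with 64 cores each.
low = cores_freed(100, 64, 0.10)
high = cores_freed(100, 64, 0.20)
print(f"Cores freed across the fleet: {low:.0f} to {high:.0f}")
```

The point of the "higher density, more efficient" argument is visible here: the reclaimed capacity scales linearly with both server count and cores per server, so denser fleets recover proportionally more usable cores from the same fixed DPU investment per host.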
Under vSphere 8, another feature, DPU-based Acceleration for NSX, can move networking, load balancing, and security functions onto a DPU, freeing up server CPU capacity. The system can also run distributed firewalls on the DPU, strengthening the security architecture without requiring software agents on the host. The NSX acceleration came out of Project Monterey, a VMware development effort with Nvidia, Pensando (now part of AMD), and Intel.