diff --git a/scalability_and_performance/optimization/optimizing-networking.adoc b/scalability_and_performance/optimization/optimizing-networking.adoc
index a618060f7ae6..ac9e79c43f28 100644
--- a/scalability_and_performance/optimization/optimizing-networking.adoc
+++ b/scalability_and_performance/optimization/optimizing-networking.adoc
@@ -10,22 +10,20 @@ The xref:../../networking/openshift_sdn/about-openshift-sdn.adoc#about-openshift
 
 xref:../../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes] uses Generic Network Virtualization Encapsulation (Geneve) instead of VXLAN as the tunnel protocol. This network can be tuned by using network interface controller (NIC) offloads.
 
-VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows for all pods behind a service to communicate with each other, even if they are running on different systems.
+Cloud, virtual, and bare-metal environments running {product-title} can use a high percentage of a NIC's capabilities with minimal tuning. Production clusters using OVN-Kubernetes with Geneve tunneling can handle high-throughput traffic effectively and scale up (for example, utilizing 100 Gbps NICs) and scale out (for example, adding more NICs) without requiring special configuration.
 
-VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both these outer- and
-inner-packets are subject to normal checksumming rules to guarantee data is not corrupted during transit. Depending on CPU performance, this additional
-processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks.
+In some high-performance scenarios where maximum efficiency is critical, targeted performance tuning can help optimize CPU usage, reduce overhead, and ensure that you are making full use of the NIC's capabilities.
 
-Cloud, VM, and bare metal CPU performance can be capable of handling much more than one Gbps network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or {product-title}. Any network that relies on VXLAN tunnels will perform similarly because of the VXLAN implementation.
+For environments where maximum throughput and CPU efficiency are critical, you can further optimize performance with the following strategies:
 
-If you are looking to push beyond one Gbps, you can:
+* Validate network performance using tools such as `iPerf3` and `k8s-netperf`. These tools allow you to benchmark throughput, latency, and packets-per-second (PPS) across pod and node interfaces.
 
-* Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP).
-* Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure.
+* Evaluate OVN-Kubernetes User Defined Networking (UDN) routing techniques, such as border gateway protocol (BGP).
 
-VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests.
+* Use Geneve-offload capable network adapters. Geneve-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to use the full bandwidth of their network infrastructure.
 
 // Optimizing the MTU for your network
+
 include::modules/optimizing-mtu-networking.adoc[leveloffset=+1]
 
 [role="_additional-resources"]
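The first new bullet points to `iPerf3` and `k8s-netperf` for benchmarking. As an illustrative sketch only, a basic pod-to-pod throughput check with `iPerf3` might look like the following; the pod names `iperf3-server` and `iperf3-client` and the server pod IP address `10.128.2.15` are placeholders, and both pods are assumed to have the `iperf3` binary available:

[source,terminal]
----
# Start an iperf3 server in one pod (placeholder name: iperf3-server).
$ oc rsh iperf3-server iperf3 -s

# From a second pod, run a 30-second test with four parallel streams
# against the server pod's IP address (placeholder: 10.128.2.15).
$ oc rsh iperf3-client iperf3 -c 10.128.2.15 -t 30 -P 4
----

For repeatable comparisons across pod and node scenarios, `k8s-netperf` can automate this kind of test run.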
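The Geneve-offload bullet assumes NIC hardware support for UDP tunnel offloads. As a quick, illustrative check, you can inspect what a node's NIC driver advertises with `ethtool`; the interface name `ens1f0` is a placeholder, and the exact feature names vary by driver:

[source,terminal]
----
# List the UDP tunnel offload features that the NIC driver advertises.
# On an adapter that can offload Geneve, features such as
# tx-udp_tnl-segmentation and tx-udp_tnl-csum-segmentation typically report "on".
$ ethtool -k ens1f0 | grep udp_tnl
----

When these features are off or absent, the checksum and segmentation work described in the bullet remains on the host CPU.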