* VPP: Fixed syntax on vpp/dataplane/buffers.rst
  - Fixed section level markers
  - Fixed a list in the `buffers-per-numa` section
* vpp: Expanded DPDK options description in interface settings
  - Added more details about `num-tx-queues` calculations.
  - Fixed section markers on the `configuration/dataplane/interface.rst` page.

vpp/dataplane/buffers.rst

The following parameters can be configured for VPP buffers:

buffers-per-numa
----------------

Number of buffers allocated per NUMA node. This setting helps in optimizing memory access patterns for multi-CPU systems.

It usually needs to be tuned if:

- there are a lot of interfaces in the system
- there are a lot of queues in NICs
- there are large descriptor sizes configured for the NICs

The value should be set responsibly; overprovisioning can lead to issues on NICs configured with the XDP driver.

.. cfgcmd:: set vpp settings buffers buffers-per-numa <value>
Avoid setting this value too low, as that can lead to packet drops.
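
When sizing the value, estimate the requirement for each NIC and then sum the results for all NICs in the system. A purely illustrative command follows; the value is hypothetical, not a recommendation:

.. code-block:: none

   set vpp settings buffers buffers-per-numa 65536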

data-size
---------

This value sets how much payload data can be stored in a single buffer allocated by VPP.
Making it larger can reduce buffer chains for big packets, while a smaller value can save memory for environments handling mostly small packets.
Optimal size depends on the typical packet size in your network. If you are not sure, use the biggest MTU in your network plus some overhead (e.g., 128 bytes).
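
For example, if the largest MTU in your network is 9000 bytes, that guideline suggests roughly 9000 + 128 = 9128 bytes. The command below is only a sketch: it assumes ``data-size`` is set with the same ``set vpp settings buffers ...`` syntax as the other buffer options, which is not shown in this excerpt.

.. code-block:: none

   set vpp settings buffers data-size 9128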

page-size
---------

The memory page type used for buffer allocation. Common values are 4K, 2M, or 1G.

Use pages that are configured in system settings.

.. cfgcmd:: set vpp settings buffers page-size <value>
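
For instance, on a system where 2 MB hugepages are already reserved, an illustrative setting would be the following; whether the CLI accepts exactly the ``2M`` token is an assumption based on the values listed above.

.. code-block:: none

   set vpp settings buffers page-size 2M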

Potential Issues and Troubleshooting
====================================

Improper buffer configuration can lead to various issues, including:

configuration/dataplane/interface.rst

The driver parameter specifies the type of driver used for the network interface. VPP supports two drivers for this purpose: DPDK and XDP. The choice of driver depends on the specific use case, hardware capabilities, and performance requirements. Some NICs may support only one of these drivers.
.. cfgcmd:: set vpp settings interface <interface-name> driver <driver-type>
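
For example, binding an interface to the DPDK driver could look like the command below; the interface name ``eth0`` is illustrative, and the lowercase ``dpdk`` keyword is an assumption about the accepted value.

.. code-block:: none

   set vpp settings interface eth0 driver dpdk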

.. _vpp_config_dataplane_interface_rx_mode:

rx-mode
-------

The rx-mode parameter defines how VPP handles incoming packets on the interface. There are several modes available, each with its own advantages and use cases:
The choice of rx-mode should be based on the expected traffic patterns and performance requirements of the network environment.
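
As an illustrative sketch only: assuming the setting follows the same ``set vpp settings interface <interface-name> ...`` pattern as the driver command, and that the mode keywords match VPP's usual ``polling``, ``interrupt``, and ``adaptive`` receive modes (neither assumption is confirmed by this excerpt), switching an interface to adaptive mode might look like:

.. code-block:: none

   set vpp settings interface eth0 rx-mode adaptive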

dpdk-options
------------

The dpdk-options section allows for the configuration of various DPDK-specific settings for the interface.

DPDK options you can configure are listed below (a combined example follows the list):

- ``num-rx-queues``: Specifies the number of receive queues for the interface. More queues can improve performance on multi-core systems by allowing parallel processing of incoming packets. Each queue will be assigned to a separate CPU core.
- ``num-tx-queues``: Specifies the number of transmit queues for the interface. Similar to receive queues, more transmit queues can enhance performance by enabling parallel processing of outgoing packets. By default, the VPP Dataplane has one TX queue per enabled CPU worker, or a single queue if no workers are configured.

  .. seealso:: :doc:`cpu`

- ``num-rx-desc``: Defines the size of each receive queue. Larger queue sizes can help accommodate bursts of incoming traffic, reducing the likelihood of packet drops during high traffic periods.
- ``num-tx-desc``: Defines the size of each transmit queue. Larger sizes can help manage bursts of outgoing traffic more effectively.
- ``promisc``: Enables or disables promiscuous mode on the interface. When promiscuous mode is enabled, the interface receives all packets on the network, regardless of their type or destination. Some NICs need this feature enabled to avoid filtering out packets (for example, to pass VLAN-tagged packets).
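
A combined, purely illustrative example for a hypothetical four-worker system is shown below; it assumes these options nest under ``dpdk-options`` exactly as named above, and the values are examples rather than recommendations.

.. code-block:: none

   set vpp settings interface eth0 dpdk-options num-rx-queues 4
   set vpp settings interface eth0 dpdk-options num-tx-queues 4
   set vpp settings interface eth0 dpdk-options num-rx-desc 2048
   set vpp settings interface eth0 dpdk-options num-tx-desc 2048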

xdp-options
-----------

The xdp-options section allows for the configuration of various XDP-specific settings for the interface.

Indicators of such issues are:

- Failed commits after adding or modifying interface settings