Commit 59fcd64

VPP: Syntax fixes and improvements (#1683)
* VPP: Fixed syntax on vpp/dataplane/buffers.rst

  - Fixed sections levels markers
  - Fixed a list in `buffers-per-numa` section

* vpp: Expanded DPDK options description in interface settings

  - Added more details about `num-tx-queues` calculations.
  - Fixed section marks on the `configuration/dataplane/interface.rst` page.
1 parent 087ed8b commit 59fcd64

2 files changed: +14 −10

docs/vpp/configuration/dataplane/buffers.rst

Lines changed: 7 additions & 4 deletions
@@ -18,13 +18,16 @@ Buffer Configuration Parameters
 The following parameters can be configured for VPP buffers:
 
 buffers-per-numa
-^^^^^^^^^^^^^^^^
+----------------
 
 Number of buffers allocated per NUMA node. This setting helps in optimizing memory access patterns for multi-CPU systems.
+
 Usually it needs to be tuned if:
+
 - there are a lot of interfaces in the system
 - there are a lot of queues in NICs
 - there are big descriptors size configured for NICs
+
 The value should be set responsibly, overprovisioning can lead to issues with NICs configured with XDP driver.
 
 .. cfgcmd:: set vpp settings buffers buffers-per-numa <value>
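As a worked illustration of the sizing guidance in this section (the NIC, queue, and descriptor numbers below are invented; the per-NIC estimate of queues × descriptors and the summing across NICs is an assumption based on the tuning hints above and the per-NIC summing advice in the next hunk):

    # NIC1: 4 RX queues x 2048 descriptors = 8192 buffers
    # NIC2: 2 RX queues x 1024 descriptors = 2048 buffers
    # sum = 10240; round up for headroom rather than undersizing
    set vpp settings buffers buffers-per-numa 16384

Only the command form comes from the cfgcmd directive above; the values are illustrative.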
@@ -40,7 +43,7 @@ This should be done for each NIC, and then sum the results for all NICs in the s
 Try to avoid setting this value too low to avoid packet drops.
 
 data-size
-^^^^^^^^^
+---------
 
 This value sets how much payload data can be stored in a single buffer allocated by VPP.
 Making it larger can reduce buffer chains for big packets, while a smaller value can save memory for environments handling mostly small packets.
@@ -50,7 +53,7 @@ Making it larger can reduce buffer chains for big packets, while a smaller value
 Optimal size depends on the typical packet size in your network. If you are not sure, use the value of biggest MTU in your network plus some overhead (e.g., 128 bytes).
 
 page-size
-^^^^^^^^^
+---------
 
 A memory pages type used for buffer allocation. Common values are 4K, 2M, or 1G.
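A hedged example for the data-size and page-size settings above, assuming jumbo frames (MTU 9000) and 2M hugepages already configured on the host; the data-size command keyword is assumed by analogy with the other buffer settings, since its cfgcmd directive is not visible in this diff, while the page-size command matches the cfgcmd shown in the next hunk:

    # biggest MTU in the network plus some overhead (9000 + 128, rounded up)
    set vpp settings buffers data-size 9216
    # must match a hugepage size actually configured in system settings
    set vpp settings buffers page-size 2M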

@@ -59,7 +62,7 @@ Use pages that are configured in system settings.
 .. cfgcmd:: set vpp settings buffers page-size <value>
 
 Potential Issues and Troubleshooting
-------------------------------------
+====================================
 
 Improper buffer configuration can lead to various issues, including:

docs/vpp/configuration/dataplane/interface.rst

Lines changed: 7 additions & 6 deletions
@@ -15,7 +15,8 @@ Interface Configuration Parameters
 ==================================
 
 driver
-^^^^^^
+------
+
 The driver parameter specifies the type of driver used for the network interface. VPP supports two types of drivers that can be used for this: DPDK and XDP. The choice of driver depends on the specific use case, hardware capabilities, and performance requirements. Some NICs may support only one of these drivers.
 
 .. cfgcmd:: set vpp settings interface <interface-name> driver <driver-type>
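A minimal usage sketch of the cfgcmd above; the interface name eth0 is a placeholder, and the driver keyword is assumed to be the lowercase form of one of the two supported drivers named in the description:

    set vpp settings interface eth0 driver dpdk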
@@ -25,7 +26,7 @@ The DPDK driver is generally preferred for high-performance scenarios, while the
 .. _vpp_config_dataplane_interface_rx_mode:
 
 rx-mode
-^^^^^^^
+-------
 
 The rx-mode parameter defines how VPP handles incoming packets on the interface. There are several modes available, each with its own advantages and use cases:

@@ -38,7 +39,7 @@ The rx-mode parameter defines how VPP handles incoming packets on the interface.
 The choice of rx-mode should be based on the expected traffic patterns and performance requirements of the network environment.
 
 dpdk-options
-^^^^^^^^^^^^
+------------
 
 The dpdk-options section allows for the configuration of various DPDK-specific settings for the interface.

@@ -47,16 +48,16 @@ The dpdk-options section allows for the configuration of various DPDK-specific s
 DPDK options you can configure are:
 
 - ``num-rx-queues``: Specifies the number of receive queues for the interface. More queues can improve performance on multi-core systems by allowing parallel processing of incoming packets. Each queue will be assigned to a separate CPU core.
+- ``num-tx-queues``: Specifies the number of transmit queues for the interface. Similar to receive queues, more transmit queues can enhance performance by enabling parallel processing of outgoing packets. By default, the VPP Dataplane has one TX queue per enabled CPU worker, or a single queue if no workers are configured.
 
 .. seealso:: :doc:`cpu`
 
-- ``num-tx-queues``: Specifies the number of transmit queues for the interface. Similar to receive queues, more transmit queues can enhance performance by enabling parallel processing of outgoing packets.
 - ``num-rx-desc``: Defines the size of each receive queue. Larger queue sizes can help accommodate bursts of incoming traffic, reducing the likelihood of packet drops during high traffic periods.
 - ``num-tx-desc``: Defines the size of each transmit queue. Larger sizes can help manage bursts of outgoing traffic more effectively.
 - ``promisc``: Enables or disables promiscuous mode on the interface. When promiscuous mode is enabled, the interface will receive all packets on the network, regardless of type and destination of the packets. Some NICs need this feature to be enabled to avoid filtering out packets (for example to pass VLAN tagged packets).
 
 xdp-options
-^^^^^^^^^^^
+-----------
 
 The xdp-options section allows for the configuration of various XDP-specific settings for the interface.
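A hedged configuration sketch tying the DPDK options above together; the interface name eth0 is a placeholder and the exact option paths are assumed from the section name (dpdk-options), since the corresponding cfgcmd directives are not part of this hunk:

    # with 4 CPU workers enabled, VPP would already create 4 TX queues by default;
    # setting num-tx-queues explicitly overrides that calculation
    set vpp settings interface eth0 dpdk-options num-rx-queues 4
    set vpp settings interface eth0 dpdk-options num-tx-queues 4
    set vpp settings interface eth0 dpdk-options num-rx-desc 2048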

@@ -82,4 +83,4 @@ Improper interface configuration can lead to various issues, including:
 Indicators of such issues are:
 
 - Failed commits after adding or modifying an interface settings
-- Low throughput or high latency on the interface
+- Low throughput or high latency on the interface
