Configuration

You can connect Ethernet interfaces to the VPP dataplane using one of two drivers: DPDK or XDP. DPDK is preferred if your NIC is supported.

An example:

set vpp settings interface eth1 driver dpdk
set vpp settings interface eth1 driver xdp

Default values should be fine for testing, but more specific settings are required for the best performance; that tuning is beyond the scope of this section.

If you want to enable routing between interfaces handled by the kernel and interfaces handled by VPP, configure the following option and reboot the router:

set vpp settings lcp route-no-paths

Ethernet interfaces moved to VPP should still be configured in the usual place: set interfaces ethernet.
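
For example, an IP address and MTU on an interface attached to VPP are still assigned with the standard commands (the values below are placeholders):

set interfaces ethernet eth1 address '203.0.113.1/24'
set interfaces ethernet eth1 mtu '1500'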

You can configure additional types of interfaces using set vpp interfaces. By default, they will be added to the VPP dataplane only. If you want to see them in the kernel as well, you must configure an additional kernel interface counterpart and connect it to VPP:

set vpp interfaces vxlan vxlan10 kernel-interface 'vpptap10'
set vpp interfaces vxlan vxlan10 remote '192.0.2.2'
set vpp interfaces vxlan vxlan10 source-address '192.0.2.1'
set vpp interfaces vxlan vxlan10 vni '10'
set vpp kernel-interfaces vpptap10 address '198.51.100.1/24'

Big routing tables

If you have a big routing table, such as a full view, you need to change the memory settings so that it fits into the FIB. Here is an example for 1M IPv4 routes:

set system sysctl parameter net.core.rmem_default value '134217728'
set system sysctl parameter net.core.rmem_max value '536870912'
set system sysctl parameter net.core.wmem_default value '134217728'
set system sysctl parameter net.core.wmem_max value '536870912'


set vpp settings host-resources max-map-count '6176'
set vpp settings host-resources nr-hugepages '3072'
set vpp settings host-resources shmmax '6442450944'
set vpp settings lcp netlink rx-buffer-size '536870912'
set vpp settings memory main-heap-page-size 'default-hugepage'
set vpp settings memory main-heap-size '4G'
set vpp settings statseg page-size 'default-hugepage'
set vpp settings statseg size '256M'

You may need to increase these values further depending on your routing table size.
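
As a rough guide to how the example values above relate (assuming the default 2 MiB hugepage size on x86-64; treat this as a sanity check rather than an exact formula):

# nr-hugepages: 3072 pages x 2 MiB  = 6442450944 bytes (6 GiB) of hugepage memory
# shmmax:                             6442450944 bytes, i.e. the same 6 GiB
# max-map-count:                      roughly 2 x nr-hugepages, 6176 here
# main-heap-size 4G + statseg 256M:   allocated from that hugepage pool
# lcp netlink rx-buffer-size:         536870912, matching net.core.rmem_max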

RX modes, performance, and CPU usage

Ethernet and kernel interfaces support three types of RX packet processing:

  • polling

    This mode provides the best performance and stable latency, but it utilizes 100% of the attached CPU cores.

  • interrupt

    Classic interrupt-based RX packet processing. Performance may vary depending on the NIC's interrupt generation logic and the traffic profile. CPU is used on demand.

  • hybrid

    Uses interrupts when the traffic level is low and switches to polling automatically when the traffic level grows. Provides a good compromise between CPU usage and performance.

Interrupt and hybrid modes may not be available on all hardware. An example of selecting the RX mode is shown below.
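
The RX mode is selected per interface. A minimal sketch, assuming an rx-mode option exists under the same configuration nodes used above (the exact paths are assumptions and may differ between versions; verify with tab completion in configuration mode):

# note: these command paths are assumptions; check tab completion on your installation
set vpp settings interface eth1 rx-mode polling
set vpp kernel-interfaces vpptap10 rx-mode interrupt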