1 Using Open vSwitch with DPDK
2 ============================
Open vSwitch can use Intel(R) DPDK lib to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a setup.
8 The DPDK support of Open vSwitch is considered experimental.
9 It has not been thoroughly tested.
This version of Open vSwitch should be built manually with `configure`
and `make`.
14 OVS needs a system with 1GB hugepages support.
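As a quick sanity check (a sketch; `pdpe1gb` is the flag Linux uses in /proc/cpuinfo to report 1GB page support), you can confirm the CPU can back 1GB hugepages:

```
# Prints "pdpe1gb" if the CPU supports 1GB hugepages
grep -o pdpe1gb /proc/cpuinfo | sort -u
```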
16 Building and Installing:
17 ------------------------
Optional (if building with vhost-cuse): `fuse`, `fuse-devel` (`libfuse-dev`
on Debian/Ubuntu)
23 1. Configure build & install DPDK:
27 export DPDK_DIR=/usr/src/dpdk-16.04
31 2. Then run `make install` to build and install the library.
32 For default install without IVSHMEM:
34 `make install T=x86_64-native-linuxapp-gcc DESTDIR=install`
36 To include IVSHMEM (shared memory):
38 `make install T=x86_64-ivshmem-linuxapp-gcc DESTDIR=install`
40 For further details refer to http://dpdk.org/
42 2. Configure & build the Linux kernel:
Refer to intel-dpdk-getting-started-guide.pdf for understanding
DPDK kernel requirements.
47 3. Configure & build OVS:
Non-IVSHMEM:
`export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`
IVSHMEM:
`export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`
60 ./configure --with-dpdk=$DPDK_BUILD [CFLAGS="-g -O2 -Wno-cast-align"]
64 Note: 'clang' users may specify the '-Wno-cast-align' flag to suppress DPDK cast-align warnings.
For better performance one can enable aggressive compiler optimizations and
use the special instructions (popcnt, crc32) that may not be available on all
machines. Instead of typing `make`, type:
70 `make CFLAGS='-O3 -march=native'`
72 Refer to [INSTALL.userspace.md] for general requirements of building userspace OVS.
74 Using the DPDK with ovs-vswitchd:
75 ---------------------------------
1. Setup system boot
   Add the following options to the kernel bootline:
80 `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
82 2. Setup DPDK devices:
DPDK devices can be set up using either the VFIO (for DPDK 1.7+) or UIO
modules. UIO requires inserting an out-of-tree driver igb_uio.ko that is
available in DPDK. Setup for both methods is described below.
UIO:
1. insert uio.ko: `modprobe uio`
90 2. insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
91 3. Bind network device to igb_uio:
92 `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
VFIO:
VFIO needs to be supported in the kernel and the BIOS. More information
can be found in the [DPDK Linux GSG].
99 1. Insert vfio-pci.ko: `modprobe vfio-pci`
100 2. Set correct permissions on vfio device: `sudo /usr/bin/chmod a+x /dev/vfio`
101 and: `sudo /usr/bin/chmod 0666 /dev/vfio/*`
102 3. Bind network device to vfio-pci:
103 `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=vfio-pci eth1`
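Either way, the binding can be confirmed with the status option of the same script (a quick check; eth1 is the example device used above):

```
# The device should now appear under "Network devices using DPDK-compatible driver"
$DPDK_DIR/tools/dpdk_nic_bind.py --status
```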
3. Mount the hugetlbfs filesystem
107 `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`
Refer to http://www.dpdk.org/doc/quick-start for verifying DPDK setup.
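As a minimal local check (not a substitute for the DPDK quick-start guide), confirm that the hugepages were allocated and the mount is visible:

```
# HugePages_Total/HugePages_Free should reflect the pages reserved on the bootline
grep -i huge /proc/meminfo
mount | grep hugetlbfs
```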
111 4. Follow the instructions in [INSTALL.md] to install only the
112 userspace daemons and utilities (via 'make install').
1. First time only: database creation (or clearing):
116 mkdir -p /usr/local/etc/openvswitch
117 mkdir -p /usr/local/var/run/openvswitch
118 rm /usr/local/etc/openvswitch/conf.db
119 ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
120 /usr/local/share/openvswitch/vswitch.ovsschema
123 2. Start ovsdb-server
126 ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
127 --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
128 --private-key=db:Open_vSwitch,SSL,private_key \
--certificate=db:Open_vSwitch,SSL,certificate \
130 --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
133 3. First time after db creation, initialize:
136 ovs-vsctl --no-wait init
5. Start vswitchd:

DPDK configuration arguments can be passed to vswitchd via Open_vSwitch
other_config column. The recognized configuration options are listed below.
Defaults will be provided for all values not explicitly set.
* dpdk-init
Specifies whether OVS should initialize and support DPDK ports. This is
a boolean, and defaults to false.
* dpdk-lcore-mask
Specifies the CPU cores on which dpdk lcore threads should be spawned.
The DPDK lcore threads are used for DPDK library tasks, such as
library internal message processing, logging, etc. Value should be in
the form of a hex string (so '0x123') similar to the 'taskset' mask
input.
155 If not specified, the value will be determined by choosing the lowest
156 CPU core from initial cpu affinity list. Otherwise, the value will be
157 passed directly to the DPDK library.
158 For performance reasons, it is best to set this to a single core on
159 the system, rather than allow lcore threads to float.
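For example, to pin the lcore threads to core 0 (an illustrative choice, using the same ovs-vsctl pattern as the other options in this document):

```
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
```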
* dpdk-alloc-mem
This sets the total memory to preallocate from hugepages regardless of
processor socket. It is recommended to use dpdk-socket-mem instead.
* dpdk-socket-mem
Comma separated list of memory to pre-allocate from hugepages on specific
sockets.
* dpdk-hugepage-dir
Directory where hugetlbfs is mounted.
* cuse-dev-name
Option to set the vhost_cuse character device name.
* vhost-sock-dir
Option to set the path to the vhost_user unix socket files.
NOTE: Changing any of these options requires restarting the ovs-vswitchd
daemon.
181 Open vSwitch can be started as normal. DPDK will be initialized as long
182 as the dpdk-init option has been set to 'true'.
186 export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
187 ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
188 ovs-vswitchd unix:$DB_SOCK --pidfile --detach
If more than one GB hugepage is allocated (as for IVSHMEM), set the amount and
use NUMA node 0 memory:
195 ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
196 ovs-vswitchd unix:$DB_SOCK --pidfile --detach
199 6. Add bridge & ports
201 To use ovs-vswitchd with DPDK, create a bridge with datapath_type
202 "netdev" in the configuration database. For example:
204 `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`
206 Now you can add dpdk devices. OVS expects DPDK device names to start with
207 "dpdk" and end with a portid. vswitchd should print (in the log file) the
208 number of dpdk devices found.
211 ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
212 ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
Once the first DPDK port is added to vswitchd, it creates a polling thread and
polls the dpdk device in a continuous loop. Therefore CPU utilization
for that thread is always 100%.
Note: creating bonds of DPDK interfaces is slightly different to creating
bonds of system interfaces. For DPDK, the interface type must be explicitly
set. For example:
224 ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk
7. Add test flows

Test flow script across NICs (assuming ovs in /usr/src/ovs):
234 # Move to command directory
235 cd /usr/src/ovs/utilities/
237 # Clear current flows
238 ./ovs-ofctl del-flows br0
240 # Add flows between port 1 (dpdk0) to port 2 (dpdk1)
241 ./ovs-ofctl add-flow br0 in_port=1,action=output:2
242 ./ovs-ofctl add-flow br0 in_port=2,action=output:1
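The installed flows and their packet counters can then be checked while traffic is running (a quick verification step using the same utilities directory):

```
# Hit counts should increase on both flows as traffic crosses the NICs
./ovs-ofctl dump-flows br0
```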
8. QoS

Assuming you have a vhost-user port transmitting traffic consisting of
248 packets of size 64 bytes, the following command would limit the egress
249 transmission rate of the port to ~1,000,000 packets per second:
251 `ovs-vsctl set port vhost-user0 qos=@newqos -- --id=@newqos create qos
252 type=egress-policer other-config:cir=46000000 other-config:cbs=2048`
254 To examine the QoS configuration of the port:
256 `ovs-appctl -t ovs-vswitchd qos/show vhost-user0`
258 To clear the QoS configuration from the port and ovsdb use the following:
260 `ovs-vsctl destroy QoS vhost-user0 -- clear Port vhost-user0 qos`
For more details regarding egress-policer parameters please refer to the
vswitch.xml.
Performance Tuning:
-------------------

1. PMD affinitization
270 A poll mode driver (pmd) thread handles the I/O of all DPDK
271 interfaces assigned to it. A pmd thread will busy loop through
272 the assigned port/rxq's polling for packets, switch the packets
273 and send to a tx port if required. Typically, it is found that
274 a pmd thread is CPU bound, meaning that the greater the CPU
275 occupancy the pmd thread can get, the better the performance. To
276 that end, it is good practice to ensure that a pmd thread has as
277 many cycles on a core available to it as possible. This can be
278 achieved by affinitizing the pmd thread with a core that has no
279 other workload. See section 7 below for a description of how to
280 isolate cores for this purpose also.
The following command can be used to specify the affinity of the pmd
threads.
285 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`
287 By setting a bit in the mask, a pmd thread is created and pinned
288 to the corresponding CPU core. e.g. to run a pmd thread on core 1
290 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=2`
292 For more information, please refer to the Open_vSwitch TABLE section in
294 `man ovs-vswitchd.conf.db`
Note that a pmd thread on a NUMA node is only created if there is
297 at least one DPDK interface from that NUMA node added to OVS.
299 2. Multiple poll mode driver threads
301 With pmd multi-threading support, OVS creates one pmd thread
302 for each NUMA node by default. However, it can be seen that in cases
303 where there are multiple ports/rxq's producing traffic, performance
304 can be improved by creating multiple pmd threads running on separate
305 cores. These pmd threads can then share the workload by each being
306 responsible for different ports/rxq's. Assignment of ports/rxq's to
307 pmd threads is done automatically.
The following command can be used to specify the affinity of the pmd
threads.
312 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`
314 A set bit in the mask means a pmd thread is created and pinned
315 to the corresponding CPU core. e.g. to run pmd threads on core 1 and 2
317 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6`
319 For more information, please refer to the Open_vSwitch TABLE section in
321 `man ovs-vswitchd.conf.db`
323 For example, when using dpdk and dpdkvhostuser ports in a bi-directional
324 VM loopback as shown below, spreading the workload over 2 or 4 pmd
threads shows significant improvements as there will be more total CPU
occupancy available.
328 NIC port0 <-> OVS <-> VM <-> OVS <-> NIC port 1
330 The following command can be used to confirm that the port/rxq assignment
331 to pmd threads is as required:
333 `ovs-appctl dpif-netdev/pmd-rxq-show`
335 This can also be checked with:
339 taskset -p <pid_of_pmd>
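Alternatively, a sketch using `ps` (assuming the pmd threads carry the `pmd` prefix in their thread names, which is how ovs-vswitchd names them):

```
# "psr" shows the processor each pmd thread last ran on
ps -eLo tid,comm,psr | grep pmd
```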
342 To understand where most of the pmd thread time is spent and whether the
343 caches are being utilized, these commands can be used:
346 # Clear previous stats
347 ovs-appctl dpif-netdev/pmd-stats-clear
349 # Check current stats
350 ovs-appctl dpif-netdev/pmd-stats-show
353 3. DPDK port Rx Queues
355 `ovs-vsctl set Interface <DPDK interface> options:n_rxq=<integer>`
The command above sets the number of rx queues for the DPDK interface.
358 The rx queues are assigned to pmd threads on the same NUMA node in a
359 round-robin fashion. For more information, please refer to the
360 Open_vSwitch TABLE section in
362 `man ovs-vswitchd.conf.db`
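For example, to give the dpdk0 port four rx queues (an illustrative value):

```
ovs-vsctl set Interface dpdk0 options:n_rxq=4
```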
4. Exact Match Cache

Each pmd thread contains one EMC. After initial flow setup in the
367 datapath, the EMC contains a single table and provides the lowest level
368 (fastest) switching for DPDK ports. If there is a miss in the EMC then
369 the next level where switching will occur is the datapath classifier.
370 Missing in the EMC and looking up in the datapath classifier incurs a
371 significant performance penalty. If lookup misses occur in the EMC
372 because it is too small to handle the number of flows, its size can
373 be increased. The EMC size can be modified by editing the define
374 EM_FLOW_HASH_SHIFT in lib/dpif-netdev.c.
376 As mentioned above an EMC is per pmd thread. So an alternative way of
377 increasing the aggregate amount of possible flow entries in EMC and
378 avoiding datapath classifier lookups is to have multiple pmd threads
379 running. This can be done as described in section 2.
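A minimal sketch for locating the define before editing and rebuilding OVS (assuming the source tree layout referenced above):

```
# Shows the current EM_FLOW_HASH_SHIFT value in the OVS source tree
grep -n "EM_FLOW_HASH_SHIFT" lib/dpif-netdev.c
```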
5. Compiler options

The default compiler optimization level is '-O2'. Changing this to
384 more aggressive compiler optimizations such as '-O3' or
385 '-Ofast -march=native' with gcc can produce performance gains.
387 6. Simultaneous Multithreading (SMT)
389 With SMT enabled, one physical core appears as two logical cores
390 which can improve performance.
392 SMT can be utilized to add additional pmd threads without consuming
393 additional physical cores. Additional pmd threads may be added in the
394 same manner as described in section 2. If trying to minimize the use
395 of physical cores for pmd threads, care must be taken to set the
396 correct bits in the pmd-cpu-mask to ensure that the pmd threads are
397 pinned to SMT siblings.
399 For example, when using 2x 10 core processors in a dual socket system
400 with HT enabled, /proc/cpuinfo will report 40 logical cores. To use
401 two logical cores which share the same physical core for pmd threads,
402 the following command can be used to identify a pair of logical cores.
404 `cat /sys/devices/system/cpu/cpuN/topology/thread_siblings_list`
406 where N is the logical core number. In this example, it would show that
407 cores 1 and 21 share the same physical core. The pmd-cpu-mask to enable
two pmd threads running on these two logical cores (one physical core)
is:

`ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=200002`
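As a quick check of the mask arithmetic (bit 1 for core 1 and bit 21 for core 21):

```
# Prints 200002, i.e. the mask with bits 1 and 21 set
printf '%x\n' $(( (1 << 21) | (1 << 1) ))
```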
413 Note that SMT is enabled by the Hyper-Threading section in the
414 BIOS, and as such will apply to the whole system. So the impact of
415 enabling/disabling it for the whole system should be considered
416 e.g. If workloads on the system can scale across multiple cores,
SMT may be very beneficial. However, if they do not and perform best
418 on a single physical core, SMT may not be beneficial.
420 7. The isolcpus kernel boot parameter
422 isolcpus can be used on the kernel bootline to isolate cores from the
423 kernel scheduler and hence dedicate them to OVS or other packet
forwarding related workloads. For example a Linux kernel boot-line could be:

`GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=4 default_hugepagesz=1G intel_iommu=off isolcpus=1-19"`
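After editing the GRUB defaults, the configuration must be regenerated and the system rebooted for the new bootline to take effect; the exact command is distro-dependent, for example:

```
# Debian/Ubuntu; on Fedora/RHEL use: grub2-mkconfig -o /boot/grub2/grub.cfg
sudo update-grub && sudo reboot
```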
429 8. NUMA/Cluster On Die
431 Ideally inter NUMA datapaths should be avoided where possible as packets
432 will go across QPI and there may be a slight performance penalty when
433 compared with intra NUMA datapaths. On Intel Xeon Processor E5 v3,
434 Cluster On Die is introduced on models that have 10 cores or more.
435 This makes it possible to logically split a socket into two NUMA regions
436 and again it is preferred where possible to keep critical datapaths
437 within the one cluster.
439 It is good practice to ensure that threads that are in the datapath are
440 pinned to cores in the same NUMA area. e.g. pmd threads and QEMU vCPUs
441 responsible for forwarding.
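A sketch for checking which NUMA node a NIC sits on (the PCI address is illustrative; -1 means the platform does not report a node):

```
# Pin pmd threads and QEMU vCPUs to cores on the node reported here
cat /sys/bus/pci/devices/0000:04:00.0/numa_node
```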
443 9. Rx Mergeable buffers
445 Rx Mergeable buffers is a virtio feature that allows chaining of multiple
446 virtio descriptors to handle large packet sizes. As such, large packets
447 are handled by reserving and chaining multiple free descriptors
448 together. Mergeable buffer support is negotiated between the virtio
449 driver and virtio device and is supported by the DPDK vhost library.
450 This behavior is typically supported and enabled by default, however
451 in the case where the user knows that rx mergeable buffers are not needed
452 i.e. jumbo frames are not needed, it can be forced off by adding
453 mrg_rxbuf=off to the QEMU command line options. By not reserving multiple
454 chains of descriptors it will make more individual virtio descriptors
available for rx to the guest using dpdkvhost ports and this can improve
performance.
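For example, a QEMU device fragment with the feature forced off (following the -device syntax used elsewhere in this document):

```
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off
```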
458 10. Packet processing in the guest
Whether simply forwarding packets from one interface to another or
performing more complex packet processing in the guest, it is good
practice to ensure that the thread performing this work has as much CPU
463 occupancy as possible. For example when the DPDK sample application
464 `testpmd` is used to forward packets in the guest, multiple QEMU vCPU
465 threads can be created. Taskset can then be used to affinitize the
466 vCPU thread responsible for forwarding to a dedicated core not used
467 for other general processing on the host system.
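A minimal sketch of this pinning (the thread id 12345 and core 5 are illustrative; vCPU thread names vary with the QEMU version):

```
# Find the QEMU vCPU thread ids on the host, then pin the forwarding vCPU to core 5
ps -eLo tid,comm | grep -i qemu
taskset -pc 5 12345
```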
469 11. DPDK virtio pmd in the guest
471 dpdkvhostcuse or dpdkvhostuser ports can be used to accelerate the path
472 to the guest using the DPDK vhost library. This library is compatible with
473 virtio-net drivers in the guest but significantly better performance can
474 be observed when using the DPDK virtio pmd driver in the guest. The DPDK
475 `testpmd` application can be used in the guest as an example application
that forwards packets from one DPDK vhost port to another. An example of
477 running `testpmd` in the guest can be seen here.
479 `./testpmd -c 0x3 -n 4 --socket-mem 512 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan --forward-mode=io --auto-start`
See below for information on dpdkvhostcuse and dpdkvhostuser ports.
482 See [DPDK Docs] for more information on `testpmd`.
DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add dpdk rings
490 as a port to the vswitch. OVS will expect the DPDK ring device name to
491 start with dpdkr and end with a portid.
493 `ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`
495 DPDK rings client test application
497 Included in the test directory is a sample DPDK application for testing
498 the rings. This is from the base dpdk directory and modified to work
499 with the ring naming used within ovs.
Location: tests/ovs_client
506 cd /usr/src/ovs/tests/
507 ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
510 In the case of the dpdkr example above the "port id you gave dpdkr" is 0.
512 It is essential to have --proc-type=secondary
514 The application simply receives an mbuf on the receive queue of the
515 ethernet ring and then places that same mbuf on the transmit ring of
516 the ethernet ring. It is a trivial loopback application.
518 DPDK rings in VM (IVSHMEM shared memory communications)
519 -------------------------------------------------------
521 In addition to executing the client in the host, you can execute it within
522 a guest VM. To do so you will need a patched qemu. You can download the
patch and getting started guide at:
525 https://01.org/packet-processing/downloads
527 A general rule of thumb for better performance is that the client
application should not be assigned the same dpdk core mask "-c" as
the vswitchd.
DPDK vhost:
-----------

DPDK 16.04 supports two types of vhost:

1. vhost-user
2. vhost-cuse
539 Whatever type of vhost is enabled in the DPDK build specified, is the type
540 that will be enabled in OVS. By default, vhost-user is enabled in DPDK.
541 Therefore, unless vhost-cuse has been enabled in DPDK, vhost-user ports
542 will be enabled in OVS.
Please note that support for vhost-cuse is intended to be deprecated in OVS
in a future release.
DPDK vhost-user:
----------------

The following sections describe the use of vhost-user 'dpdkvhostuser' ports
with OVS.
552 DPDK vhost-user Prerequisites:
------------------------------
1. DPDK 16.04 with vhost support enabled as documented in the "Building and
   Installing" section above.
558 2. QEMU version v2.1.0+
560 QEMU v2.1.0 will suffice, but it is recommended to use v2.2.0 if providing
561 your VM with memory greater than 1GB due to potential issues with memory
562 mapping larger areas.
564 Adding DPDK vhost-user ports to the Switch:
-------------------------------------------
Following the steps above to create a bridge, you can now add DPDK vhost-user
as a port to the vswitch. Unlike DPDK ring ports, DPDK vhost-user ports can
have arbitrary names, except that forward and backward slashes are prohibited
in the names.
572 - For vhost-user, the name of the port type is `dpdkvhostuser`
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1
type=dpdkvhostuser
579 This action creates a socket located at
580 `/usr/local/var/run/openvswitch/vhost-user-1`, which you must provide
581 to your VM on the QEMU command line. More instructions on this can be
582 found in the next section "DPDK vhost-user VM configuration"
583 - If you wish for the vhost-user sockets to be created in a directory other
584 than `/usr/local/var/run/openvswitch`, you may specify another location
585 in the ovsdb like so:
587 `./utilities/ovs-vsctl --no-wait \
588 set Open_vSwitch . other_config:vhost-sock-dir=path`
590 DPDK vhost-user VM configuration:
591 ---------------------------------
592 Follow the steps below to attach vhost-user port(s) to a VM.
594 1. Configure sockets.
595 Pass the following parameters to QEMU to attach a vhost-user device:
598 -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1
599 -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce
600 -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1
...where vhost-user-1 is the name of the vhost-user port added
to the switch.
605 Repeat the above parameters for multiple devices, changing the
606 chardev path and id as necessary. Note that a separate and different
607 chardev path needs to be specified for each vhost-user device. For
example, if you have a second vhost-user port named 'vhost-user-2', you
609 append your QEMU command line with an additional set of parameters:
612 -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
613 -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce
614 -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2
617 2. Configure huge pages.
618 QEMU must allocate the VM's memory on hugetlbfs. vhost-user ports access
619 a virtio-net device's virtual rings and packet buffers mapping the VM's
620 physical memory on hugetlbfs. To enable vhost-user ports to map the VM's
memory into their process address space, pass the following parameters
to QEMU:

-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
share=on -numa node,memdev=mem -mem-prealloc
630 3. Optional: Enable multiqueue support
631 The vhost-user interface must be configured in Open vSwitch with the
desired number of queues with:
635 ovs-vsctl set Interface vhost-user-2 options:n_rxq=<requested queues>
638 QEMU needs to be configured as well.
639 The $q below should match the queues requested in OVS (if $q is more,
640 packets will not be received).
641 The $v is the number of vectors, which is '$q x 2 + 2'.
644 -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
645 -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce,queues=$q
646 -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mq=on,vectors=$v
649 If one wishes to use multiple queues for an interface in the guest, the
650 driver in the guest operating system must be configured to do so. It is
651 recommended that the number of queues configured be equal to '$q'.
653 For example, this can be done for the Linux kernel virtio-net driver with:
656 ethtool -L <DEV> combined <$q>
659 A note on the command above:
`-L`: Changes the number of channels of the specified network device
663 `combined`: Changes the number of multi-purpose channels.
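The current and maximum channel counts can be verified in the guest with the standard ethtool query (the device name is illustrative):

```
# Shows the maximum and currently configured channel counts
ethtool -l eth0
```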
DPDK vhost-cuse:
----------------

The following sections describe the use of vhost-cuse 'dpdkvhostcuse' ports
with OVS.
671 DPDK vhost-cuse Prerequisites:
------------------------------
1. DPDK 16.04 with vhost support enabled as documented in the "Building and
   Installing" section above.
676 As an additional step, you must enable vhost-cuse in DPDK by setting the
677 following additional flag in `config/common_base`:
679 `CONFIG_RTE_LIBRTE_VHOST_USER=n`
681 Following this, rebuild DPDK as per the instructions in the "Building and
682 Installing" section. Finally, rebuild OVS as per step 3 in the "Building
683 and Installing" section - OVS will detect that DPDK has vhost-cuse libraries
compiled and in turn will enable support for it in the switch and disable
vhost-user support.
2. Insert the Cuse module:

   `modprobe cuse`
691 3. Build and insert the `eventfd_link` module:
cd $DPDK_DIR/lib/librte_vhost/eventfd_link/
make
insmod $DPDK_DIR/lib/librte_vhost/eventfd_link.ko
699 4. QEMU version v2.1.0+
701 vhost-cuse will work with QEMU v2.1.0 and above, however it is recommended to
702 use v2.2.0 if providing your VM with memory greater than 1GB due to potential
703 issues with memory mapping larger areas.
704 Note: QEMU v1.6.2 will also work, with slightly different command line parameters,
705 which are specified later in this document.
707 Adding DPDK vhost-cuse ports to the Switch:
-------------------------------------------
710 Following the steps above to create a bridge, you can now add DPDK vhost-cuse
as a port to the vswitch. Unlike DPDK ring ports, DPDK vhost-cuse ports can have
arbitrary names.
714 - For vhost-cuse, the name of the port type is `dpdkvhostcuse`
ovs-vsctl add-port br0 vhost-cuse-1 -- set Interface vhost-cuse-1
type=dpdkvhostcuse
721 When attaching vhost-cuse ports to QEMU, the name provided during the
722 add-port operation must match the ifname parameter on the QEMU command
723 line. More instructions on this can be found in the next section.
725 DPDK vhost-cuse VM configuration:
726 ---------------------------------
728 vhost-cuse ports use a Linux* character device to communicate with QEMU.
729 By default it is set to `/dev/vhost-net`. It is possible to reuse this
730 standard device for DPDK vhost, which makes setup a little simpler but it
731 is better practice to specify an alternative character device in order to
732 avoid any conflicts if kernel vhost is to be used in parallel.
734 1. This step is only needed if using an alternative character device.
736 The new character device filename must be specified in the ovsdb:
738 `./utilities/ovs-vsctl --no-wait set Open_vSwitch . \
739 other_config:cuse-dev-name=my-vhost-net`
In the example above, the character device to be used will be
`/dev/my-vhost-net`.
744 2. This step is only needed if reusing the standard character device. It will
conflict with the kernel vhost character device so the user must first
remove it:
748 `rm -rf /dev/vhost-net`
750 3a. Configure virtio-net adaptors:
751 The following parameters must be passed to the QEMU binary:
754 -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on
755 -device virtio-net-pci,netdev=net1,mac=<mac>
758 Repeat the above parameters for multiple devices.
The DPDK vhost library will negotiate its own features, so they
761 need not be passed in as command line params. Note that as offloads are
762 disabled this is the equivalent of setting:
764 `csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off`
3b. If using an alternative character device, it must also be explicitly
passed to QEMU using the `vhostfd` argument:
-netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on,
vhostfd=<open_fd>
-device virtio-net-pci,netdev=net1,mac=<mac>
775 The open file descriptor must be passed to QEMU running as a child
776 process. This could be done with a simple python script.
import os
import subprocess

fd = os.open("/dev/usvhost", os.O_RDWR)
subprocess.call("qemu-system-x86_64 .... -netdev tap,id=vhostnet0,"
                "vhost=on,vhostfd=" + str(fd) + "...", shell=True)
784 Alternatively the `qemu-wrap.py` script can be used to automate the
785 requirements specified above and can be used in conjunction with libvirt if
desired. See the "DPDK vhost VM configuration with QEMU wrapper" section
below.
789 4. Configure huge pages:
790 QEMU must allocate the VM's memory on hugetlbfs. Vhost ports access a
791 virtio-net device's virtual rings and packet buffers mapping the VM's
792 physical memory on hugetlbfs. To enable vhost-ports to map the VM's
793 memory into their process address space, pass the following parameters
796 `-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
797 share=on -numa node,memdev=mem -mem-prealloc`
799 Note: For use with an earlier QEMU version such as v1.6.2, use the
800 following to configure hugepages instead:
802 `-mem-path /dev/hugepages -mem-prealloc`
804 DPDK vhost-cuse VM configuration with QEMU wrapper:
805 ---------------------------------------------------
806 The QEMU wrapper script automatically detects and calls QEMU with the
807 necessary parameters. It performs the following actions:
809 * Automatically detects the location of the hugetlbfs and inserts this
810 into the command line parameters.
* Automatically opens file descriptors for each virtio-net device and
inserts these into the command line parameters.
813 * Calls QEMU passing both the command line parameters passed to the
814 script itself and those it has auto-detected.
816 Before use, you **must** edit the configuration parameters section of the
817 script to point to the correct emulator location and set additional
818 settings. Of these settings, `emul_path` and `us_vhost_path` **must** be
819 set. All other settings are optional.
821 To use directly from the command line simply pass the wrapper some of the
822 QEMU parameters: it will configure the rest. For example:
825 qemu-wrap.py -cpu host -boot c -hda <disk image> -m 4096 -smp 4
826 --enable-kvm -nographic -vnc none -net none -netdev tap,id=net1,
827 script=no,downscript=no,ifname=if1,vhost=on -device virtio-net-pci,
828 netdev=net1,mac=00:00:00:00:00:01
831 DPDK vhost-cuse VM configuration with libvirt:
832 ----------------------------------------------
834 If you are using libvirt, you must enable libvirt to access the character
device by adding it to controllers cgroup for libvirtd using the following
steps.
838 1. In `/etc/libvirt/qemu.conf` add/edit the following lines:
1) clear_emulator_capabilities = 0
2) user = "root"
3) group = "root"
844 4) cgroup_device_acl = [
845 "/dev/null", "/dev/full", "/dev/zero",
846 "/dev/random", "/dev/urandom",
847 "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
848 "/dev/rtc", "/dev/hpet", "/dev/net/tun",
849 "/dev/<my-vhost-device>",
853 <my-vhost-device> refers to "vhost-net" if using the `/dev/vhost-net`
device. If you have specified a different name in the database
using the "other_config:cuse-dev-name" parameter, please specify that
filename instead.
858 2. Disable SELinux or set to permissive mode
860 3. Restart the libvirtd process
861 For example, on Fedora:
863 `systemctl restart libvirtd.service`
865 After successfully editing the configuration, you may launch your
866 vhost-enabled VM. The XML describing the VM can be configured like so
867 within the <qemu:commandline> section:
869 1. Set up shared hugepages:
872 <qemu:arg value='-object'/>
873 <qemu:arg value='memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on'/>
874 <qemu:arg value='-numa'/>
875 <qemu:arg value='node,memdev=mem'/>
876 <qemu:arg value='-mem-prealloc'/>
879 2. Set up your tap devices:
882 <qemu:arg value='-netdev'/>
883 <qemu:arg value='type=tap,id=net1,script=no,downscript=no,ifname=vhost0,vhost=on'/>
884 <qemu:arg value='-device'/>
885 <qemu:arg value='virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01'/>
888 Repeat for as many devices as are desired, modifying the id, ifname
889 and mac as necessary.
891 Again, if you are using an alternative character device (other than
892 `/dev/vhost-net`), please specify the file descriptor like so:
894 `<qemu:arg value='type=tap,id=net3,script=no,downscript=no,ifname=vhost0,vhost=on,vhostfd=<open_fd>'/>`
896 Where <open_fd> refers to the open file descriptor of the character device.
897 Instructions of how to retrieve the file descriptor can be found in the
898 "DPDK vhost VM configuration" section.
899 Alternatively, the process is automated with the qemu-wrap.py script,
900 detailed in the next section.
902 Now you may launch your VM using virt-manager, or like so:
904 `virsh create my_vhost_vm.xml`
906 DPDK vhost-cuse VM configuration with libvirt and QEMU wrapper:
---------------------------------------------------------------
To use the qemu-wrapper script in conjunction with libvirt, follow the
910 steps in the previous section before proceeding with the following steps:
1. Place `qemu-wrap.py` in libvirtd's binary search PATH ($PATH),
   ideally in the same directory in which the QEMU binary is located.
2. Ensure that the script has the same owner/group and file permissions
   as the QEMU binary.
918 3. Update the VM xml file using "virsh edit VM.xml"
920 1. Set the VM to use the launch script.
921 Set the emulator path contained in the `<emulator><emulator/>` tags.
922 For example, replace:
`<emulator>/usr/bin/qemu-kvm</emulator>`

with:

`<emulator>/usr/bin/qemu-wrap.py</emulator>`
930 4. Edit the Configuration Parameters section of the script to point to
931 the correct emulator location and set any additional options. If you are
using an alternative character device name, please set "us_vhost_path" to the
933 location of that device. The script will automatically detect and insert
934 the correct "vhostfd" value in the QEMU command line arguments.
936 5. Use virt-manager to launch the VM
938 Running ovs-vswitchd with DPDK backend inside a VM
939 --------------------------------------------------
941 Please note that additional configuration is required if you want to run
942 ovs-vswitchd with DPDK backend inside a QEMU virtual machine. Ovs-vswitchd
943 creates separate DPDK TX queues for each CPU core available. This operation
fails inside a QEMU virtual machine because, by default, the VirtIO NIC
provided to the guest is configured to support only a single TX queue and a single RX
946 queue. To change this behavior, you need to turn on 'mq' (multiqueue)
947 property of all virtio-net-pci devices emulated by QEMU and used by DPDK.
948 You may do it manually (by changing QEMU command line) or, if you use Libvirt,
949 by adding the following string:
951 `<driver name='vhost' queues='N'/>`
953 to <interface> sections of all network devices used by DPDK. Parameter 'N'
954 determines how many queues can be used by the guest.
Restrictions:
-------------

- Works with 1500 MTU only; a few changes are needed in the DPDK lib to fix
  this issue.
- Currently the DPDK port does not make use of any offload functionality.
- DPDK-vHost support works with 1G huge pages.
ivshmem:
- If you run Open vSwitch with smaller page sizes (e.g. 2MB), you may be
965 unable to share any rings or mempools with a virtual machine.
966 This is because the current implementation of ivshmem works by sharing
967 a single 1GB huge page from the host operating system to any guest
968 operating system through the Qemu ivshmem device. When using smaller
969 page sizes, multiple pages may be required to hold the ring descriptors
970 and buffer pools. The Qemu ivshmem device does not allow you to share
971 multiple file descriptors to the guest operating system. However, if you
972 want to share dpdkr rings with other processes on the host, you can do
973 this with smaller page sizes.
975 Platform and Network Interface:
976 - By default with DPDK 16.04, a maximum of 64 TX queues can be used with an
977 Intel XL710 Network Interface on a platform with more than 64 logical
978 cores. If a user attempts to add an XL710 interface as a DPDK port type to
979 a system as described above, an error will be reported that initialization
980 failed for the 65th queue. OVS will then roll back to the previous
981 successful queue initialization and use that value as the total number of
982 TX queues available with queue locking. If a user wishes to use more than
983 64 queues and avoid locking, then the
984 `CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF` config parameter in DPDK must be
985 increased to the desired number of queues. Both DPDK and OVS must be
986 recompiled for this change to take effect.
Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.
[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md
995 [DPDK Linux GSG]: http://www.dpdk.org/doc/guides/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-igb-uioor-vfio-modules
996 [DPDK Docs]: http://dpdk.org/doc