polls the DPDK device in a continuous loop. Therefore, CPU utilization
for that thread is always 100%.
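
The set of cores used for pmd threads can be limited with the pmd-cpu-mask
option. A minimal sketch, assuming cores 1 and 2 should be used for polling
(the mask 0x6 is an illustrative value, not a recommendation):

```
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
```
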
+ Note: creating bonds of DPDK interfaces is slightly different from creating
+ bonds of system interfaces. For DPDK, the interface type must be explicitly
+ set. For example:
+
+ ```
+ ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 \
+     -- set Interface dpdk0 type=dpdk \
+     -- set Interface dpdk1 type=dpdk
+ ```
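+
+ Once created, the bond's status can be checked with `ovs-appctl bond/show
+ dpdkbond`.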
+
7. Add test flows
Test flow script across NICs (assuming ovs in /usr/src/ovs):
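
A minimal sketch of such a script, assuming dpdk0 and dpdk1 were added to
bridge br0 as OpenFlow ports 1 and 2:

```
# Clear any existing flows on the bridge
ovs-ofctl del-flows br0

# Forward traffic between the two DPDK ports
ovs-ofctl add-flow br0 in_port=1,action=output:2
ovs-ofctl add-flow br0 in_port=2,action=output:1
```
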
Note: the pmd threads on a NUMA node are only created if there is at least
one DPDK interface from that NUMA node added to OVS.
- Note, core 0 is always reserved from non-pmd threads and should never be set
- in the cpu mask.
-
To understand where most of the time is spent and whether the caches are
effective, these commands can be used:
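
For instance, the per-pmd statistics can be cleared and then displayed
(assuming the dpif-netdev appctl commands are available in this build):

```
# Reset the statistics counters
ovs-appctl dpif-netdev/pmd-stats-clear

# Show cycle and cache hit/miss statistics per pmd thread
ovs-appctl dpif-netdev/pmd-stats-show
```
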
5. Use virt-manager to launch the VM
+Running ovs-vswitchd with DPDK backend inside a VM
+--------------------------------------------------
+
+Please note that additional configuration is required if you want to run
+ovs-vswitchd with the DPDK backend inside a QEMU virtual machine. ovs-vswitchd
+creates a separate DPDK TX queue for each available CPU core. This operation
+fails inside a QEMU virtual machine because, by default, the VirtIO NIC
+provided to the guest supports only a single TX queue and a single RX queue.
+To change this behavior, you need to turn on the 'mq' (multiqueue) property
+of all virtio-net-pci devices emulated by QEMU and used by DPDK. You may do
+this manually (by changing the QEMU command line) or, if you use libvirt, by
+adding the following string:
+
+`<driver name='vhost' queues='N'/>`
+
+to the <interface> sections of all network devices used by DPDK. The
+parameter 'N' determines how many queues can be used by the guest.
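+
+For example, when launching QEMU manually, the same effect can be achieved
+through the 'mq' and 'vectors' properties of the virtio-net-pci device. This
+is a sketch rather than a complete command line; the netdev id, queue count
+and 'vectors' value (a common rule of thumb is 2*N+2) are assumptions:
+
+```
+# Example for N=4 queues; 'vectors' is typically set to 2*N+2.
+-netdev tap,id=net0,vhost=on,queues=4
+-device virtio-net-pci,netdev=net0,mq=on,vectors=10
+```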
+
Restrictions:
-------------