Using Open vSwitch with DPDK
============================

Open vSwitch can use the Intel(R) DPDK lib to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a setup.

The DPDK support of Open vSwitch is considered experimental.
It has not been thoroughly tested.

This version of Open vSwitch should be built manually with `configure`
and `make`.

OVS needs a system with 1GB hugepages support.
Building and Installing:
------------------------

Required: DPDK 2.0, `fuse`, `fuse-devel` (`libfuse-dev` on Debian/Ubuntu)

1. Configure, build & install DPDK:
   Set `$DPDK_DIR`:

   `export DPDK_DIR=/usr/src/dpdk-2.0`
   `cd $DPDK_DIR`
   Update `config/common_linuxapp` so that DPDK generates a single lib file
   (this modification is also required for the IVSHMEM build):

   `CONFIG_RTE_BUILD_COMBINE_LIBS=y`

   Update `config/common_linuxapp` so that DPDK is built with vhost
   libraries; currently, OVS only supports vhost-cuse, so the DPDK vhost-user
   libraries should be explicitly turned off (they are enabled by default
   in DPDK 2.0):

   `CONFIG_RTE_LIBRTE_VHOST=y`
   `CONFIG_RTE_LIBRTE_VHOST_USER=n`
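   These edits can also be scripted; a minimal sketch, assuming the stock
   defaults of `CONFIG_RTE_BUILD_COMBINE_LIBS=n`, `CONFIG_RTE_LIBRTE_VHOST=n`
   and `CONFIG_RTE_LIBRTE_VHOST_USER=y`:

   ```
   # Run from $DPDK_DIR; flips the three options discussed above.
   sed -i 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' config/common_linuxapp
   sed -i 's/CONFIG_RTE_LIBRTE_VHOST=n/CONFIG_RTE_LIBRTE_VHOST=y/' config/common_linuxapp
   sed -i 's/CONFIG_RTE_LIBRTE_VHOST_USER=y/CONFIG_RTE_LIBRTE_VHOST_USER=n/' config/common_linuxapp
   ```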
   Then run `make install` to build and install the library.
   For default install without IVSHMEM:

   `make install T=x86_64-native-linuxapp-gcc`

   To include IVSHMEM (shared memory):

   `make install T=x86_64-ivshmem-linuxapp-gcc`

   For further details refer to http://dpdk.org/
2. Configure & build the Linux kernel:

   Refer to intel-dpdk-getting-started-guide.pdf for understanding
   DPDK kernel requirements.
3. Configure & build OVS:

   * Non-IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

   * IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

   Then:

     cd $(OVS_DIR)/openvswitch
     ./configure --with-dpdk=$DPDK_BUILD [CFLAGS="-g -O2 -Wno-cast-align"]
     make

   Note: 'clang' users may specify the '-Wno-cast-align' flag to suppress
   DPDK cast-align warnings.

   For better performance one can enable aggressive compiler optimizations
   and use the special instructions (popcnt, crc32) that may not be available
   on all machines. Instead of typing `make`, type:

   `make CFLAGS='-O3 -march=native'`

   Refer to [INSTALL.userspace.md] for general requirements of building
   userspace OVS.
Using the DPDK with ovs-vswitchd:
---------------------------------

1. Setup system boot:
   Add the following options to the kernel bootline:

   `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
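   After rebooting, a quick way to verify that the hugepages were actually
   reserved (the exact counters depend on your system):

   `grep Huge /proc/meminfo`

   Hugepagesize should report 1048576 kB and HugePages_Total should be at
   least 1.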
2. Setup DPDK devices:

   DPDK devices can be setup using either the VFIO (for DPDK 1.7+) or UIO
   modules. UIO requires inserting an out-of-tree driver, igb_uio.ko, that is
   available in DPDK. Setup for both methods is described below.
   UIO:

   1. Insert uio.ko: `modprobe uio`
   2. Insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
   3. Bind network device to igb_uio:
      `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
   VFIO:

   VFIO needs to be supported in the kernel and the BIOS. More information
   can be found in the [DPDK Linux GSG].

   1. Insert vfio-pci.ko: `modprobe vfio-pci`
   2. Set correct permissions on the vfio device: `sudo /usr/bin/chmod a+x /dev/vfio`
      and: `sudo /usr/bin/chmod 0666 /dev/vfio/*`
   3. Bind network device to vfio-pci:
      `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=vfio-pci eth1`
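   Whichever method is used, the binding can be confirmed with the bind
   script's status listing:

   `$DPDK_DIR/tools/dpdk_nic_bind.py --status`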
3. Mount the hugetlbfs filesystem:

   `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`
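   To make the mount persistent across reboots, an `/etc/fstab` entry along
   these lines may be used (a sketch; adjust the mount point to your setup):

   `none /dev/hugepages hugetlbfs pagesize=1G 0 0`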
   Refer to http://www.dpdk.org/doc/quick-start for verifying the DPDK setup.
4. Follow the instructions in [INSTALL.md] to install only the
   userspace daemons and utilities (via 'make install').

   1. First time only: db creation (or clearing):

        mkdir -p /usr/local/etc/openvswitch
        mkdir -p /usr/local/var/run/openvswitch
        rm /usr/local/etc/openvswitch/conf.db
        ovsdb-tool create /usr/local/etc/openvswitch/conf.db  \
            /usr/local/share/openvswitch/vswitch.ovsschema
   2. Start ovsdb-server:

        ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
            --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
            --private-key=db:Open_vSwitch,SSL,private_key \
            --certificate=db:Open_vSwitch,SSL,certificate \
            --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
   3. First time after db creation, initialize:

        `ovs-vsctl --no-wait init`
5. Start vswitchd:

   DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
   argument. This needs to be the first argument passed to the vswitchd
   process. The dpdk argument -c is ignored by ovs-dpdk, but it is a required
   parameter for dpdk initialization.

     export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
     ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach
   If you have allocated more than one GB hugepage (as is needed for IVSHMEM),
   set the amount to use and select NUMA node 0 memory:

     ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
         -- unix:$DB_SOCK --pidfile --detach
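   Once the daemon is up, a quick check that it is running and connected to
   the database (assuming the default install paths used above):

   `ovs-vsctl show`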
6. Add bridge & ports:

   To use ovs-vswitchd with DPDK, create a bridge with datapath_type
   "netdev" in the configuration database. For example:

   `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`

   Now you can add dpdk devices. OVS expects DPDK device names to start with
   "dpdk" and end with a port id. vswitchd should print (in the log file) the
   number of dpdk devices found.
     ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
     ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
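   One way to check that the DPDK devices were found and attached is to
   inspect the interface records, e.g.:

   `ovs-vsctl list Interface dpdk0`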
   Once the first DPDK port is added to vswitchd, it creates a polling thread
   and polls the dpdk device in a continuous loop. Therefore CPU utilization
   for that thread is always 100%.
7. Add test flows:

   Test flow script across NICs (assuming ovs in /usr/src/ovs):

     # Move to command directory
     cd /usr/src/ovs/utilities/

     # Clear current flows
     ./ovs-ofctl del-flows br0

     # Add flows between port 1 (dpdk0) and port 2 (dpdk1)
     ./ovs-ofctl add-flow br0 in_port=1,action=output:2
     ./ovs-ofctl add-flow br0 in_port=2,action=output:1
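   The installed flows can be verified afterwards with:

   `./ovs-ofctl dump-flows br0`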
8. Performance tuning:

   With pmd multi-threading support, OVS creates one pmd thread for each
   numa node by default. The pmd thread handles the I/O of all DPDK
   interfaces on the same numa node. The following two commands can be used
   to configure the multi-threading behavior.

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`

   The command above asks for a CPU mask for setting the affinity of pmd
   threads. A set bit in the mask means a pmd thread is created and pinned
   to the corresponding CPU core. For more information, please refer to
   `man ovs-vswitchd.conf.db`.
   `ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>`

   The command above sets the number of rx queues for each DPDK interface.
   The rx queues are assigned to pmd threads on the same numa node in a
   round-robin fashion. For more information, please refer to
   `man ovs-vswitchd.conf.db`.
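   For example, to give each DPDK interface two rx queues (the value here is
   purely illustrative):

   `ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2`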
   Ideally for maximum throughput, the pmd thread should not be scheduled out,
   which temporarily halts its execution. The following affinitization methods
   will help to achieve this behavior.

   Let's pick cores 4, 6, 8 and 10 for the pmd threads to run on. Also assume
   a dual 8-core Sandy Bridge system with hyperthreading enabled, where CPU1
   has cores 0,...,7 and 16,...,23 and CPU2 has cores 8,...,15 and 24,...,31.
   (A different cpu configuration could have different core mask
   requirements.)

   To the kernel bootline add the core isolation list for these cores and
   their associated hyperthread siblings (e.g. isolcpus=4,20,6,22,8,24,10,26).
   Reboot the system for the isolation to take effect, then restart
   everything.
   Configure pmd threads on cores 4, 6, 8 and 10 using 'pmd-cpu-mask':

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550`
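   As a quick sanity check, the mask can be recomputed from the chosen core
   list in the shell:

   ```
   # Set bits 4, 6, 8 and 10; prints 0x550
   printf '0x%x\n' $(( (1 << 4) | (1 << 6) | (1 << 8) | (1 << 10) ))
   ```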
   You should be able to check that the pmd threads are pinned to the correct
   cores via:

     top -p `pidof ovs-vswitchd` -H -d1

   Note, the pmd threads on a numa node are only created if there is at least
   one DPDK interface from that numa node that has been added to OVS.

   Note, core 0 is always reserved for non-pmd threads and should never be
   set in the cpu mask.
   To understand where most of the time is spent and whether the caches are
   effective, these commands can be used:

     ovs-appctl dpif-netdev/pmd-stats-clear #To reset statistics
     ovs-appctl dpif-netdev/pmd-stats-show
DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add dpdk rings
as a port to the vswitch. OVS will expect the DPDK ring device name to
start with dpdkr and end with a portid.

`ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`
DPDK rings client test application
----------------------------------

Included in the test directory is a sample DPDK application for testing
the rings. It is taken from the base dpdk directory and modified to work
with the ring naming used within ovs.

Location: tests/ovs_client

To run the client:

  cd /usr/src/ovs/tests/
  ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"

In the case of the dpdkr example above, the "port id you gave dpdkr" is 0.

It is essential to have `--proc-type=secondary`.

The application simply receives an mbuf on the receive queue of the
ethernet ring and then places that same mbuf on the transmit ring of
the ethernet ring. It is a trivial loopback application.
DPDK rings in VM (IVSHMEM shared memory communications)
--------------------------------------------------------

In addition to executing the client in the host, you can execute it within
a guest VM. To do so you will need a patched qemu. You can download the
patch and a getting started guide at:

https://01.org/packet-processing/downloads

A general rule of thumb for better performance is that the client
application should not be assigned the same dpdk core mask "-c" as
the vswitchd.
DPDK vhost:
-----------

Only vhost-cuse is supported at present, i.e. the standard QEMU vhost-user
interface is not used. It is intended that vhost-user support will be added
in future releases when supported in DPDK, and that vhost-cuse will
eventually be deprecated. See [DPDK Docs] for more info on vhost.

Prerequisites:

1.  Insert the Cuse module:

    `modprobe cuse`

2.  Build and insert the `eventfd_link` module:

    `cd $DPDK_DIR/lib/librte_vhost/eventfd_link/`
    `make`
    `insmod $DPDK_DIR/lib/librte_vhost/eventfd_link.ko`
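    A quick way to confirm that both modules loaded successfully:

    `lsmod | grep -E 'cuse|eventfd_link'`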
Following the steps above to create a bridge, you can now add DPDK vhost
as a port to the vswitch.

`ovs-vsctl add-port br0 dpdkvhost0 -- set Interface dpdkvhost0 type=dpdkvhost`

Unlike DPDK ring ports, DPDK vhost ports can have arbitrary names:

`ovs-vsctl add-port br0 port123ABC -- set Interface port123ABC type=dpdkvhost`

However, please note that when attaching userspace devices to QEMU, the
name provided during the add-port operation must match the ifname parameter
on the QEMU command line.
DPDK vhost VM configuration:
----------------------------

vhost ports use a Linux* character device to communicate with QEMU.
By default it is set to `/dev/vhost-net`. It is possible to reuse this
standard device for DPDK vhost, which makes setup a little simpler, but it
is better practice to specify an alternative character device in order to
avoid any conflicts if kernel vhost is to be used in parallel.
1. This step is only needed if using an alternative character device.

   The new character device filename must be specified on the vswitchd
   commandline:

   `./vswitchd/ovs-vswitchd --dpdk --cuse_dev_name my-vhost-net -c 0x1 ...`

   Note that the `--cuse_dev_name` argument and associated string must be the
   first arguments after `--dpdk` and come before the EAL arguments. In the
   example above, the character device to be used will be `/dev/my-vhost-net`.
2. This step is only needed if reusing the standard character device. It will
   conflict with the kernel vhost character device, so the user must first
   remove it:

   `rm -rf /dev/vhost-net`
3a. Configure virtio-net adaptors:
   The following parameters must be passed to the QEMU binary:

     -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on
     -device virtio-net-pci,netdev=<id>,mac=<mac>

   Repeat the above parameters for multiple devices.

   The DPDK vhost library will negotiate its own features, so they
   need not be passed in as command line params. Note that as offloads are
   disabled this is the equivalent of setting:

   `csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off`
3b. If using an alternative character device, it must also be explicitly
   passed to QEMU using the `vhostfd` argument:

     -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on,
     vhostfd=<open_fd>
     -device virtio-net-pci,netdev=<id>,mac=<mac>

   The open file descriptor must be passed to QEMU running as a child
   process. This could be done with a simple python script, for example:

     import os
     import subprocess
     fd = os.open("/dev/usvhost", os.O_RDWR)
     subprocess.call("qemu-system-x86_64 .... -netdev tap,id=vhostnet0,"
                     "vhost=on,vhostfd=" + str(fd) + " ...", shell=True)
   Alternatively, the `qemu-wrap.py` script can be used to automate the
   requirements specified above and can be used in conjunction with libvirt
   if desired. See the "DPDK vhost VM configuration with QEMU wrapper"
   section below.
4. Configure huge pages:
   QEMU must allocate the VM's memory on hugetlbfs. Vhost ports access a
   virtio-net device's virtual rings and packet buffers by mapping the VM's
   physical memory on hugetlbfs. To enable vhost-ports to map the VM's
   memory into their process address space, pass the following parameters
   to QEMU:

   `-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
   share=on -numa node,memdev=mem -mem-prealloc`
DPDK vhost VM configuration with QEMU wrapper:
----------------------------------------------

The QEMU wrapper script automatically detects and calls QEMU with the
necessary parameters. It performs the following actions:

  * Automatically detects the location of the hugetlbfs and inserts this
    into the command line parameters.
  * Automatically opens file descriptors for each virtio-net device and
    inserts these into the command line parameters.
  * Calls QEMU passing both the command line parameters passed to the
    script itself and those it has auto-detected.

Before use, you **must** edit the configuration parameters section of the
script to point to the correct emulator location and set additional
settings. Of these settings, `emul_path` and `us_vhost_path` **must** be
set. All other settings are optional.

To use directly from the command line simply pass the wrapper some of the
QEMU parameters: it will configure the rest. For example:

  qemu-wrap.py -cpu host -boot c -hda <disk image> -m 4096 -smp 4
  --enable-kvm -nographic -vnc none -net none -netdev tap,id=net1,
  script=no,downscript=no,ifname=if1,vhost=on -device virtio-net-pci,
  netdev=net1,mac=00:00:00:00:00:01
DPDK vhost VM configuration with libvirt:
-----------------------------------------

If you are using libvirt, you must enable libvirt to access the character
device by adding it to the controllers cgroup for libvirtd using the
following steps.

1. In `/etc/libvirt/qemu.conf` add/edit the following lines:

   1) clear_emulator_capabilities = 0
   2) user = "root"
   3) group = "root"
   4) cgroup_device_acl = [
          "/dev/null", "/dev/full", "/dev/zero",
          "/dev/random", "/dev/urandom",
          "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
          "/dev/rtc", "/dev/hpet", "/dev/net/tun",
          "/dev/<my-vhost-device>",
          "/dev/hugepages"]

   <my-vhost-device> refers to "vhost-net" if using the `/dev/vhost-net`
   device. If you have specified a different name on the ovs-vswitchd
   commandline using the "--cuse_dev_name" parameter, please specify that
   filename instead.
2. Disable SELinux or set it to permissive mode.

3. Restart the libvirtd process.
   For example, on Fedora:

   `systemctl restart libvirtd.service`
After successfully editing the configuration, you may launch your
vhost-enabled VM. The XML describing the VM can be configured like so
within the <qemu:commandline> section:

1. Set up shared hugepages:

     <qemu:arg value='-object'/>
     <qemu:arg value='memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on'/>
     <qemu:arg value='-numa'/>
     <qemu:arg value='node,memdev=mem'/>
     <qemu:arg value='-mem-prealloc'/>
2. Set up your tap devices:

     <qemu:arg value='-netdev'/>
     <qemu:arg value='type=tap,id=net1,script=no,downscript=no,ifname=vhost0,vhost=on'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01'/>

   Repeat for as many devices as are desired, modifying the id, ifname
   and mac as necessary.
Again, if you are using an alternative character device (other than
`/dev/vhost-net`), please specify the file descriptor like so:

`<qemu:arg value='type=tap,id=net3,script=no,downscript=no,ifname=vhost0,vhost=on,vhostfd=<open_fd>'/>`

Where <open_fd> refers to the open file descriptor of the character device.
Instructions on how to retrieve the file descriptor can be found in the
"DPDK vhost VM configuration" section.
Alternatively, the process is automated with the qemu-wrap.py script,
detailed in the next section.

Now you may launch your VM using virt-manager, or like so:

`virsh create my_vhost_vm.xml`
DPDK vhost VM configuration with libvirt and QEMU wrapper:
----------------------------------------------------------

To use the qemu-wrapper script in conjunction with libvirt, follow the
steps in the previous section before proceeding with the following steps:

1. Place `qemu-wrap.py` in libvirtd's binary search PATH ($PATH).
   Ideally in the same directory that the QEMU binary is located.

2. Ensure that the script has the same owner/group and file permissions
   as the QEMU binary.

3. Update the VM xml file using "virsh edit VM.xml":

   1. Set the VM to use the launch script.
      Set the emulator path contained in the `<emulator></emulator>` tags.
      For example, replace:

      `<emulator>/usr/bin/qemu-kvm</emulator>`

      with:

      `<emulator>/usr/bin/qemu-wrap.py</emulator>`

4. Edit the Configuration Parameters section of the script to point to
   the correct emulator location and set any additional options. If you are
   using an alternative character device name, please set "us_vhost_path" to
   the location of that device. The script will automatically detect and
   insert the correct "vhostfd" value in the QEMU command line arguments.

5. Use virt-manager to launch the VM.
Restrictions:
-------------

  - Only 1500 MTU is supported; a few changes are needed in the DPDK lib to
    fix this issue.
  - Currently the DPDK port does not make use of any offload functionality.
  - DPDK-vHost support works with 1G huge pages.

  ivshmem:
  - If you run Open vSwitch with smaller page sizes (e.g. 2MB), you may be
    unable to share any rings or mempools with a virtual machine.
    This is because the current implementation of ivshmem works by sharing
    a single 1GB huge page from the host operating system to any guest
    operating system through the Qemu ivshmem device. When using smaller
    page sizes, multiple pages may be required to hold the ring descriptors
    and buffer pools. The Qemu ivshmem device does not allow you to share
    multiple file descriptors to the guest operating system. However, if you
    want to share dpdkr rings with other processes on the host, you can do
    this with smaller page sizes.
Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.
[INSTALL.userspace.md]:INSTALL.userspace.md
[INSTALL.md]:INSTALL.md
[DPDK Linux GSG]: http://www.dpdk.org/doc/guides/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-igb-uioor-vfio-modules
[DPDK Docs]: http://dpdk.org/doc