Using Open vSwitch with DPDK
============================

Open vSwitch can use the Intel(R) DPDK lib to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a mode.

The DPDK support of Open vSwitch is considered experimental.
It has not been thoroughly tested.

This version of Open vSwitch should be built manually with `configure`
and `make`.

OVS needs a system with 1GB hugepages support.
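One way to verify that the CPU supports 1GB hugepages is to check for the
`pdpe1gb` CPU flag:

```
# Non-empty output indicates 1GB hugepage support
grep -o pdpe1gb /proc/cpuinfo | uniq
```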
Building and Installing:
------------------------

Required: DPDK 2.1
Optional (if building with vhost-cuse): `fuse`, `fuse-devel` (`libfuse-dev`
on Debian/Ubuntu)
1. Configure build & install DPDK:
  1. Set `$DPDK_DIR`:

     ```
     export DPDK_DIR=/usr/src/dpdk-2.1
     cd $DPDK_DIR
     ```

  2. Update `config/common_linuxapp` so that DPDK generates a single lib file
     (this modification is also required for the IVSHMEM build):

     `CONFIG_RTE_BUILD_COMBINE_LIBS=y`

     Then run `make install` to build and install the library.
     For a default install without IVSHMEM:

     `make install T=x86_64-native-linuxapp-gcc`

     To include IVSHMEM (shared memory):

     `make install T=x86_64-ivshmem-linuxapp-gcc`

     For further details refer to http://dpdk.org/
2. Configure & build the Linux kernel:

   Refer to intel-dpdk-getting-started-guide.pdf for understanding
   the DPDK kernel requirements.
3. Configure & build OVS:

   * Non-IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

   * IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

   ```
   cd $OVS_DIR/openvswitch
   ./boot.sh
   ./configure --with-dpdk=$DPDK_BUILD [CFLAGS="-g -O2 -Wno-cast-align"]
   make
   ```

   Note: 'clang' users may specify the '-Wno-cast-align' flag to suppress DPDK cast-align warnings.

   To get better performance, one can enable aggressive compiler optimizations and
   use special instructions (popcnt, crc32) that may not be available on all
   machines. Instead of typing `make`, type:

   `make CFLAGS='-O3 -march=native'`

   Refer to [INSTALL.userspace.md] for general requirements for building userspace OVS.
Using the DPDK with ovs-vswitchd:
---------------------------------

1. Setup system boot:

   Add the following options to the kernel bootline:

   `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
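   After rebooting, one way to confirm that the hugepages were actually
   reserved (a quick sanity check):

   ```
   # HugePages_Total should match the hugepages= value on the bootline
   grep Huge /proc/meminfo
   ```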
2. Setup DPDK devices:

   DPDK devices can be setup using either the VFIO (for DPDK 1.7+) or UIO
   modules. UIO requires inserting an out-of-tree driver, igb_uio.ko, that is
   available in DPDK. Setup for both methods is described below.

   * UIO:

     1. Insert uio.ko: `modprobe uio`
     2. Insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
     3. Bind network device to igb_uio:
        `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
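     To confirm the binding took effect, the same script can list
     device-to-driver assignments:

     ```
     # The device should appear under "Network devices using DPDK-compatible driver"
     $DPDK_DIR/tools/dpdk_nic_bind.py --status
     ```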
   * VFIO:

     VFIO needs to be supported in the kernel and the BIOS. More information
     can be found in the [DPDK Linux GSG].

     1. Insert vfio-pci.ko: `modprobe vfio-pci`
     2. Set correct permissions on the vfio device: `sudo /usr/bin/chmod a+x /dev/vfio`
        and: `sudo /usr/bin/chmod 0666 /dev/vfio/*`
     3. Bind network device to vfio-pci:
        `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=vfio-pci eth1`
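     Note: VFIO also requires that the IOMMU be enabled; on Intel platforms this
     typically means adding options such as the following to the kernel bootline:

     ```
     intel_iommu=on iommu=pt
     ```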
3. Mount the hugetlbfs filesystem:

   `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`
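   To make the mount persistent across reboots, an equivalent `/etc/fstab`
   entry can be used (a sketch; adjust the mount point to your system):

   ```
   # /etc/fstab entry equivalent to the mount command above
   none /dev/hugepages hugetlbfs pagesize=1G 0 0
   ```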
   Refer to http://www.dpdk.org/doc/quick-start for verifying the DPDK setup.
4. Follow the instructions in [INSTALL.md] to install only the
   userspace daemons and utilities (via 'make install').

   1. First time only db creation (or clearing):

      ```
      mkdir -p /usr/local/etc/openvswitch
      mkdir -p /usr/local/var/run/openvswitch
      rm /usr/local/etc/openvswitch/conf.db
      ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
             /usr/local/share/openvswitch/vswitch.ovsschema
      ```

   2. Start ovsdb-server:

      ```
      ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
          --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
          --private-key=db:Open_vSwitch,SSL,private_key \
          --certificate=db:Open_vSwitch,SSL,certificate \
          --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
          --pidfile --detach
      ```

   3. First time after db creation, initialize:

      ```
      ovs-vsctl --no-wait init
      ```
5. Start vswitchd:

   DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
   argument. This needs to be the first argument passed to the vswitchd
   process. The DPDK argument -c is ignored by ovs-dpdk, but it is a required
   parameter for DPDK initialization.

   ```
   export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
   ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach
   ```

   If you have allocated more than one GB hugepage (as for IVSHMEM), set the
   amount and use NUMA node 0 memory:

   ```
   ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
      -- unix:$DB_SOCK --pidfile --detach
   ```
6. Add bridge & ports:

   To use ovs-vswitchd with DPDK, create a bridge with datapath_type
   "netdev" in the configuration database. For example:

   `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`

   Now you can add dpdk devices. OVS expects DPDK device names to start with
   "dpdk" and end with a portid. vswitchd should print (in the log file) the
   number of dpdk devices found.

   ```
   ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
   ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
   ```

   Once the first DPDK port is added to vswitchd, it creates a polling thread
   and polls the dpdk device in a continuous loop. Therefore CPU utilization
   for that thread is always 100%.
   Note: creating bonds of DPDK interfaces is slightly different from creating
   bonds of system interfaces. For DPDK, the interface type must be explicitly
   set. For example:

   ```
   ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk
   ```
7. Add test flows:

   Test flow script across NICs (assuming ovs is in /usr/src/ovs):
   Execute script:

   ```
   #! /bin/sh
   # Move to command directory
   cd /usr/src/ovs/utilities/

   # Clear current flows
   ./ovs-ofctl del-flows br0

   # Add flows between port 1 (dpdk0) and port 2 (dpdk1)
   ./ovs-ofctl add-flow br0 in_port=1,action=output:2
   ./ovs-ofctl add-flow br0 in_port=2,action=output:1
   ```
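   To confirm the flows were installed, they can be listed (output format may
   vary between OVS versions):

   ```
   ./ovs-ofctl dump-flows br0
   ```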
8. Performance tuning:

   With pmd multi-threading support, OVS creates one pmd thread for each
   numa node by default. The pmd thread handles the I/O of all DPDK
   interfaces on the same numa node. The following two commands can be used
   to configure the multi-threading behavior.

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`

   The command above asks for a CPU mask for setting the affinity of pmd
   threads. A set bit in the mask means a pmd thread is created and pinned
   to the corresponding CPU core. For more information, please refer to
   `man ovs-vswitchd.conf.db`.
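   For example, to create and pin pmd threads on cores 2 and 3, set bits 2
   and 3 of the hex mask (0x4 + 0x8 = 0xC):

   ```
   ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=C
   ```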
   `ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>`

   The command above sets the number of rx queues for each DPDK interface. The
   rx queues are assigned to pmd threads on the same numa node in a round-robin
   fashion. For more information, please refer to `man ovs-vswitchd.conf.db`.
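   For example, to give each DPDK interface four rx queues (whether this helps
   depends on how traffic is spread across queues):

   ```
   ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4
   ```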
   Ideally for maximum throughput, the pmd thread should not be scheduled out,
   which temporarily halts its execution. The following affinitization methods
   will help to achieve this.

   Let's pick cores 4,6,8,10 for pmd threads to run on. Also assume a dual
   8-core Sandy Bridge system with hyper-threading enabled, where CPU1 has
   cores 0,...,7 and 16,...,23 and CPU2 has cores 8,...,15 and 24,...,31.
   (A different cpu configuration could have different core mask requirements.)

   To the kernel bootline, add a core isolation list for these cores and their
   hyper-threading siblings (e.g. isolcpus=4,20,6,22,8,24,10,26). Reboot the
   system for the isolation to take effect, then restart everything.

   Configure pmd threads on cores 4,6,8,10 using 'pmd-cpu-mask':

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550`
   You should be able to check that pmd threads are pinned to the correct cores
   via:

   ```
   top -p `pidof ovs-vswitchd` -H -d1
   ```
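   Alternatively, a non-interactive check of thread-to-core placement:

   ```
   # Lists each ovs-vswitchd thread with the processor it last ran on
   ps -T -o spid,psr,comm -p `pidof ovs-vswitchd`
   ```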
   Note, the pmd threads on a numa node are only created if there is at least
   one DPDK interface from that numa node added to OVS.

   To understand where most of the time is spent and whether the caches are
   effective, these commands can be used:

   ```
   ovs-appctl dpif-netdev/pmd-stats-clear # To reset statistics
   ovs-appctl dpif-netdev/pmd-stats-show
   ```
DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add dpdk rings
as a port to the vswitch. OVS will expect the DPDK ring device name to
start with dpdkr and end with a portid.

`ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`
DPDK rings client test application

Included in the test directory is a sample DPDK application for testing
the rings. This is from the base dpdk directory and modified to work
with the ring naming used within ovs.

Location: tests/ovs_client

To run the client:

```
cd /usr/src/ovs/tests/
ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
```

In the case of the dpdkr example above, the "port id you gave dpdkr" is 0.

It is essential to have --proc-type=secondary.
The application simply receives an mbuf on the receive queue of the
ethernet ring and then places that same mbuf on the transmit ring of
the ethernet ring. It is a trivial loopback application.
DPDK rings in VM (IVSHMEM shared memory communications)
-------------------------------------------------------

In addition to executing the client in the host, you can execute it within
a guest VM. To do so you will need a patched qemu. You can download the
patch and getting-started guide at:

https://01.org/packet-processing/downloads

A general rule of thumb for better performance is that the client
application should not be assigned the same dpdk core mask "-c" as
the vswitchd.
DPDK vhost:
-----------

DPDK 2.1 supports two types of vhost:

1. vhost-user
2. vhost-cuse

Whatever type of vhost is enabled in the DPDK build specified is the type
that will be enabled in OVS. By default, vhost-user is enabled in DPDK.
Therefore, unless vhost-cuse has been enabled in DPDK, vhost-user ports
will be enabled in OVS.
Please note that support for vhost-cuse is intended to be deprecated in OVS
in a future release.

DPDK vhost-user:
----------------

The following sections describe the use of vhost-user 'dpdkvhostuser' ports
with OVS.
DPDK vhost-user Prerequisites:
------------------------------

1. DPDK 2.1 with vhost support enabled as documented in the "Building and
   Installing" section above.

2. QEMU version v2.1.0+

   QEMU v2.1.0 will suffice, but it is recommended to use v2.2.0 if providing
   your VM with memory greater than 1GB due to potential issues with memory
   mapping larger areas.
Adding DPDK vhost-user ports to the Switch:
-------------------------------------------

Following the steps above to create a bridge, you can now add DPDK vhost-user
as a port to the vswitch. Unlike DPDK ring ports, DPDK vhost-user ports can
have arbitrary names.

- For vhost-user, the name of the port type is `dpdkvhostuser`

  ```
  ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1
      type=dpdkvhostuser
  ```

  This action creates a socket located at
  `/usr/local/var/run/openvswitch/vhost-user-1`, which you must provide
  to your VM on the QEMU command line. More instructions on this can be
  found in the next section, "DPDK vhost-user VM configuration".
  Note: If you wish for the vhost-user sockets to be created in a
  directory other than `/usr/local/var/run/openvswitch`, you may specify
  another location on the ovs-vswitchd command line like so:

  `./vswitchd/ovs-vswitchd --dpdk -vhost_sock_dir /my-dir -c 0x1 ...`
DPDK vhost-user VM configuration:
---------------------------------

Follow the steps below to attach vhost-user port(s) to a VM.

1. Configure sockets.
   Pass the following parameters to QEMU to attach a vhost-user device:

   ```
   -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1
   -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce
   -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1
   ```

   ...where vhost-user-1 is the name of the vhost-user port added
   to the switch.
   Repeat the above parameters for multiple devices, changing the
   chardev path and id as necessary. Note that a separate and different
   chardev path needs to be specified for each vhost-user device. For
   example, if you have a second vhost-user port named 'vhost-user-2', you
   append your QEMU command line with an additional set of parameters:

   ```
   -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
   -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce
   -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2
   ```
2. Configure huge pages.
   QEMU must allocate the VM's memory on hugetlbfs. vhost-user ports access
   a virtio-net device's virtual rings and packet buffers mapping the VM's
   physical memory on hugetlbfs. To enable vhost-user ports to map the VM's
   memory into their process address space, pass the following parameters
   to QEMU:

   ```
   -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
   share=on
   -numa node,memdev=mem -mem-prealloc
   ```
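   Putting the socket and hugepage parameters together, a minimal sketch of a
   complete QEMU invocation might look like this (the disk image path, memory
   size and MAC address are placeholders to adapt):

   ```
   qemu-system-x86_64 -m 4096 -smp 2 -enable-kvm -hda /path/to/guest.img \
       -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
       -numa node,memdev=mem -mem-prealloc \
       -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1 \
       -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
       -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1
   ```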
DPDK vhost-cuse:
----------------

The following sections describe the use of vhost-cuse 'dpdkvhostcuse' ports
with OVS.

DPDK vhost-cuse Prerequisites:
------------------------------
1. DPDK 2.1 with vhost support enabled as documented in the "Building and
   Installing" section above.

   As an additional step, you must enable vhost-cuse in DPDK by setting the
   following additional flag in `config/common_linuxapp`:

   `CONFIG_RTE_LIBRTE_VHOST_USER=n`

   Following this, rebuild DPDK as per the instructions in the "Building and
   Installing" section. Finally, rebuild OVS as per step 3 in the "Building
   and Installing" section - OVS will detect that DPDK has vhost-cuse libraries
   compiled and in turn will enable support for it in the switch and disable
   vhost-user support.
2. Insert the Cuse module:

   `modprobe cuse`

3. Build and insert the `eventfd_link` module:

   ```
   cd $DPDK_DIR/lib/librte_vhost/eventfd_link/
   make
   insmod $DPDK_DIR/lib/librte_vhost/eventfd_link.ko
   ```
4. QEMU version v2.1.0+

   vhost-cuse will work with QEMU v2.1.0 and above; however, it is recommended
   to use v2.2.0 if providing your VM with memory greater than 1GB due to
   potential issues with memory mapping larger areas.
   Note: QEMU v1.6.2 will also work, with slightly different command line
   parameters, which are specified later in this document.
Adding DPDK vhost-cuse ports to the Switch:
-------------------------------------------

Following the steps above to create a bridge, you can now add DPDK vhost-cuse
as a port to the vswitch. Unlike DPDK ring ports, DPDK vhost-cuse ports can
have arbitrary names.

- For vhost-cuse, the name of the port type is `dpdkvhostcuse`

  ```
  ovs-vsctl add-port br0 vhost-cuse-1 -- set Interface vhost-cuse-1
      type=dpdkvhostcuse
  ```

  When attaching vhost-cuse ports to QEMU, the name provided during the
  add-port operation must match the ifname parameter on the QEMU command
  line. More instructions on this can be found in the next section.
DPDK vhost-cuse VM configuration:
---------------------------------

vhost-cuse ports use a Linux* character device to communicate with QEMU.
By default it is set to `/dev/vhost-net`. It is possible to reuse this
standard device for DPDK vhost, which makes setup a little simpler, but it
is better practice to specify an alternative character device in order to
avoid any conflicts if kernel vhost is to be used in parallel.

1. This step is only needed if using an alternative character device.

   The new character device filename must be specified on the vswitchd
   commandline:

   `./vswitchd/ovs-vswitchd --dpdk --cuse_dev_name my-vhost-net -c 0x1 ...`

   Note that the `--cuse_dev_name` argument and associated string must be the
   first arguments after `--dpdk` and come before the EAL arguments. In the
   example above, the character device to be used will be `/dev/my-vhost-net`.

2. This step is only needed if reusing the standard character device. It will
   conflict with the kernel vhost character device, so the user must first
   remove it.

   `rm -rf /dev/vhost-net`
3a. Configure virtio-net adaptors:
    The following parameters must be passed to the QEMU binary:

    ```
    -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on
    -device virtio-net-pci,netdev=<id>,mac=<mac>
    ```

    Repeat the above parameters for multiple devices.

    The DPDK vhost library will negotiate its own features, so they
    need not be passed in as command line params. Note that as offloads are
    disabled this is the equivalent of setting:

    `csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off`
3b. If using an alternative character device, it must also be explicitly
    passed to QEMU using the `vhostfd` argument:

    ```
    -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on,
    vhostfd=<open_fd>
    -device virtio-net-pci,netdev=<id>,mac=<mac>
    ```

    The open file descriptor must be passed to QEMU running as a child
    process. This could be done with a simple python script, for example:

    ```
    #!/usr/bin/python
    import os
    import subprocess

    # Open the vhost character device; the fd is inherited by the QEMU child
    fd = os.open("/dev/usvhost", os.O_RDWR)
    subprocess.call("qemu-system-x86_64 .... -netdev tap,id=vhostnet0,"
                    "vhost=on,vhostfd=" + str(fd) + " ....", shell=True)
    ```
    Alternatively, the `qemu-wrap.py` script can be used to automate the
    requirements specified above and can be used in conjunction with libvirt
    if desired. See the "DPDK vhost-cuse VM configuration with QEMU wrapper"
    section below.
4. Configure huge pages:
   QEMU must allocate the VM's memory on hugetlbfs. Vhost ports access a
   virtio-net device's virtual rings and packet buffers mapping the VM's
   physical memory on hugetlbfs. To enable vhost ports to map the VM's
   memory into their process address space, pass the following parameters
   to QEMU:

   `-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
   share=on -numa node,memdev=mem -mem-prealloc`

   Note: For use with an earlier QEMU version such as v1.6.2, use the
   following to configure hugepages instead:

   `-mem-path /dev/hugepages -mem-prealloc`
DPDK vhost-cuse VM configuration with QEMU wrapper:
---------------------------------------------------

The QEMU wrapper script automatically detects and calls QEMU with the
necessary parameters. It performs the following actions:

  * Automatically detects the location of the hugetlbfs and inserts this
    into the command line parameters.
  * Automatically opens file descriptors for each virtio-net device and
    inserts these into the command line parameters.
  * Calls QEMU passing both the command line parameters passed to the
    script itself and those it has auto-detected.

Before use, you **must** edit the configuration parameters section of the
script to point to the correct emulator location and set additional
settings. Of these settings, `emul_path` and `us_vhost_path` **must** be
set. All other settings are optional.

To use directly from the command line, simply pass the wrapper some of the
QEMU parameters: it will configure the rest. For example:

```
qemu-wrap.py -cpu host -boot c -hda <disk image> -m 4096 -smp 4
  --enable-kvm -nographic -vnc none -net none -netdev tap,id=net1,
  script=no,downscript=no,ifname=if1,vhost=on -device virtio-net-pci,
  netdev=net1,mac=00:00:00:00:00:01
```
DPDK vhost-cuse VM configuration with libvirt:
----------------------------------------------

If you are using libvirt, you must enable libvirt to access the character
device by adding it to the device controller cgroup for libvirtd using the
following steps:

1. In `/etc/libvirt/qemu.conf` add/edit the following lines:

   ```
   1) clear_emulator_capabilities = 0
   2) user = "root"
   3) group = "root"
   4) cgroup_device_acl = [
          "/dev/null", "/dev/full", "/dev/zero",
          "/dev/random", "/dev/urandom",
          "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
          "/dev/rtc", "/dev/hpet", "/dev/net/tun",
          "/dev/<my-vhost-device>",
          "/dev/vhost-net"]
   ```

   <my-vhost-device> refers to "vhost-net" if using the `/dev/vhost-net`
   device. If you have specified a different name on the ovs-vswitchd
   commandline using the "--cuse_dev_name" parameter, please specify that
   filename instead.
2. Disable SELinux or set it to permissive mode.

3. Restart the libvirtd process.
   For example, on Fedora:

   `systemctl restart libvirtd.service`

After successfully editing the configuration, you may launch your
vhost-enabled VM. The XML describing the VM can be configured like so
within the <qemu:commandline> section:
1. Set up shared hugepages:

   ```
   <qemu:arg value='-object'/>
   <qemu:arg value='memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on'/>
   <qemu:arg value='-numa'/>
   <qemu:arg value='node,memdev=mem'/>
   <qemu:arg value='-mem-prealloc'/>
   ```
2. Set up your tap devices:

   ```
   <qemu:arg value='-netdev'/>
   <qemu:arg value='type=tap,id=net1,script=no,downscript=no,ifname=vhost0,vhost=on'/>
   <qemu:arg value='-device'/>
   <qemu:arg value='virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01'/>
   ```

   Repeat for as many devices as are desired, modifying the id, ifname
   and mac as necessary.
Again, if you are using an alternative character device (other than
`/dev/vhost-net`), please specify the file descriptor like so:

`<qemu:arg value='type=tap,id=net3,script=no,downscript=no,ifname=vhost0,vhost=on,vhostfd=<open_fd>'/>`

Where <open_fd> refers to the open file descriptor of the character device.
Instructions on how to retrieve the file descriptor can be found in the
"DPDK vhost-cuse VM configuration" section.
Alternatively, the process is automated with the qemu-wrap.py script,
detailed in the next section.

Now you may launch your VM using virt-manager, or like so:

`virsh create my_vhost_vm.xml`
DPDK vhost-cuse VM configuration with libvirt and QEMU wrapper:
---------------------------------------------------------------

To use the qemu-wrapper script in conjunction with libvirt, follow the
steps in the previous section before proceeding with the following steps:

1. Place `qemu-wrap.py` in libvirtd's binary search PATH ($PATH).
   Ideally this should be the same directory in which the QEMU binary is
   located.

2. Ensure that the script has the same owner/group and file permissions
   as the QEMU binary.

3. Update the VM xml file using "virsh edit VM.xml":

   1. Set the VM to use the launch script.
      Set the emulator path contained in the `<emulator></emulator>` tags.
      For example, replace:

      `<emulator>/usr/bin/qemu-kvm</emulator>`

      with:

      `<emulator>/usr/bin/qemu-wrap.py</emulator>`

4. Edit the Configuration Parameters section of the script to point to
   the correct emulator location and set any additional options. If you are
   using an alternative character device name, please set "us_vhost_path" to
   the location of that device. The script will automatically detect and
   insert the correct "vhostfd" value in the QEMU command line arguments.

5. Use virt-manager to launch the VM.
Running ovs-vswitchd with DPDK backend inside a VM
--------------------------------------------------

Please note that additional configuration is required if you want to run
ovs-vswitchd with DPDK backend inside a QEMU virtual machine. ovs-vswitchd
creates separate DPDK TX queues for each CPU core available. This operation
fails inside a QEMU virtual machine because, by default, the VirtIO NIC
provided to the guest is configured to support only a single TX queue and a
single RX queue. To change this behavior, you need to turn on the 'mq'
(multiqueue) property of all virtio-net-pci devices emulated by QEMU and used
by DPDK. You may do it manually (by changing the QEMU command line) or, if
you use libvirt, by adding the following string:

`<driver name='vhost' queues='N'/>`

to the <interface> sections of all network devices used by DPDK. The
parameter 'N' determines how many queues can be used by the guest.
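If you configure QEMU manually rather than through libvirt, a rough
equivalent is the following (a sketch only; for N queues, 'vectors' is
conventionally set to 2*N+2):

```
# Example: N=4 queues on a tap-backed virtio NIC
-netdev tap,id=net1,script=no,downscript=no,vhost=on,queues=4
-device virtio-net-pci,netdev=net1,mq=on,vectors=10,mac=00:00:00:00:00:01
```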
Restrictions:
-------------

  - Only a 1500-byte MTU is supported; a few changes are needed in the DPDK
    lib to fix this issue.
  - Currently the DPDK port does not make use of any offload functionality.
  - DPDK-vHost support works with 1G huge pages.

  ivshmem:
  - If you run Open vSwitch with smaller page sizes (e.g. 2MB), you may be
    unable to share any rings or mempools with a virtual machine.
    This is because the current implementation of ivshmem works by sharing
    a single 1GB huge page from the host operating system to any guest
    operating system through the Qemu ivshmem device. When using smaller
    page sizes, multiple pages may be required to hold the ring descriptors
    and buffer pools. The Qemu ivshmem device does not allow you to share
    multiple file descriptors to the guest operating system. However, if you
    want to share dpdkr rings with other processes on the host, you can do
    this with smaller page sizes.
  Platform and Network Interface:
  - Currently it is not possible to use an Intel XL710 Network Interface as a
    DPDK port type on a platform with more than 64 logical cores. This is
    related to how DPDK reports the number of TX queues that may be used by
    a DPDK application with an XL710. The maximum number of TX queues supported
    by a DPDK application for an XL710 is 64. If a user attempts to add an
    XL710 interface as a DPDK port type to a system as described above, the
    port addition will fail as OVS will attempt to initialize a TX queue
    greater than 64. This issue is expected to be resolved in a future DPDK
    release. As a workaround, a user can disable hyper-threading to reduce the
    overall core count of the system to be less than or equal to 64 when
    using an XL710.
Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.
[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md
[DPDK Linux GSG]: http://www.dpdk.org/doc/guides/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-igb-uioor-vfio-modules
[DPDK Docs]: http://dpdk.org/doc