-Using Open vSwitch with DPDK
-============================
+OVS DPDK INSTALL GUIDE
+================================
-Open vSwitch can use Intel(R) DPDK lib to operate entirely in
-userspace. This file explains how to install and use Open vSwitch in
-such a mode.
+## Contents
-The DPDK support of Open vSwitch is considered experimental.
-It has not been thoroughly tested.
+1. [Overview](#overview)
+2. [Building and Installation](#build)
+3. [Setup OVS DPDK datapath](#ovssetup)
+4. [DPDK in the VM](#builddpdk)
+5. [OVS Testcases](#ovstc)
+6. [Limitations](#ovslimits)
-This version of Open vSwitch should be built manually with `configure`
-and `make`.
+## <a name="overview"></a> 1. Overview
-OVS needs a system with 1GB hugepages support.
+Open vSwitch can use the DPDK library to operate entirely in userspace.
+This file provides information on installation and use of Open vSwitch
+with the DPDK datapath. This version of Open vSwitch should be built manually
+with `configure` and `make`.
-Building and Installing:
-------------------------
+The DPDK support of Open vSwitch is considered 'experimental'.
-Required DPDK 1.8.0
+### Prerequisites
-1. Configure build & install DPDK:
- 1. Set `$DPDK_DIR`
+* Required: DPDK 16.04, libnuma
+* Hardware: [DPDK Supported NICs] when physical ports are in use
+
+## <a name="build"></a> 2. Building and Installation
+
+### 2.1 Configure & build the Linux kernel
+
+On Linux distributions running kernel version >= 3.0, no kernel rebuild is required;
+only the GRUB command line needs to be updated to enable IOMMU (see VFIO support,
+section 3.2). For older kernels, check that the kernel is built with UIO, HUGETLBFS,
+PROC_PAGE_MONITOR, HPET and HPET_MMAP support.
+
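+For example, on many distributions the configuration of the running kernel can be
+checked as follows (the config file location varies by distribution; /proc/config.gz
+is another common location when CONFIG_IKCONFIG_PROC is enabled):
+
+```
+# kernel options needed by DPDK on older kernels; the path is distribution-specific
+grep -E 'CONFIG_(UIO|HUGETLBFS|PROC_PAGE_MONITOR|HPET|HPET_MMAP)=' /boot/config-$(uname -r)
+```
+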
+Detailed system requirements can be found at [DPDK requirements]; also refer to the
+advanced install guide [INSTALL.DPDK-ADVANCED.md].
+
+### 2.2 Install DPDK
+  1. [Download DPDK] and extract the file, for example into /usr/src,
+     and set DPDK_DIR
```
- export DPDK_DIR=/usr/src/dpdk-1.8.0
+ cd /usr/src/
+ wget http://dpdk.org/browse/dpdk/snapshot/dpdk-16.04.zip
+ unzip dpdk-16.04.zip
+
+ export DPDK_DIR=/usr/src/dpdk-16.04
cd $DPDK_DIR
```
- 2. Update `config/common_linuxapp` so that DPDK generate single lib file.
- (modification also required for IVSHMEM build)
+ 2. Configure and Install DPDK
- `CONFIG_RTE_BUILD_COMBINE_LIBS=y`
+ Build and install the DPDK library.
- Then run `make install` to build and isntall the library.
- For default install without IVSHMEM:
+ ```
+ export DPDK_TARGET=x86_64-native-linuxapp-gcc
+ export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
+ make install T=$DPDK_TARGET DESTDIR=install
+ ```
- `make install T=x86_64-native-linuxapp-gcc`
+  Note: For IVSHMEM, set `export DPDK_TARGET=x86_64-ivshmem-linuxapp-gcc`
- To include IVSHMEM (shared memory):
+### 2.3 Install OVS
+  OVS can be installed using different methods. For OVS to use the DPDK datapath,
+  it must be configured with DPDK support, which is done by './configure --with-dpdk'.
+  This section focuses on a generic recipe that suits most cases; for distribution
+  specific instructions, refer to [INSTALL.Fedora.md], [INSTALL.RHEL.md] and
+  [INSTALL.Debian.md].
- `make install T=x86_64-ivshmem-linuxapp-gcc`
+  The OVS sources can be downloaded in different ways; skip this step if you already
+  have the correct sources. Otherwise, download the desired version using one of the
+  methods suggested below and follow the documentation of that specific version.
- For further details refer to http://dpdk.org/
+ - OVS stable releases can be downloaded in compressed format from [Download OVS]
-2. Configure & build the Linux kernel:
+ ```
+ cd /usr/src
+ wget http://openvswitch.org/releases/openvswitch-<version>.tar.gz
+ tar -zxvf openvswitch-<version>.tar.gz
+ export OVS_DIR=/usr/src/openvswitch-<version>
+ ```
- Refer to intel-dpdk-getting-started-guide.pdf for understanding
- DPDK kernel requirement.
+  - The current OVS development tree can be cloned using the 'git' tool
-3. Configure & build OVS:
+ ```
+ cd /usr/src/
+ git clone https://github.com/openvswitch/ovs.git
+ export OVS_DIR=/usr/src/ovs
+ ```
- * Non IVSHMEM:
+ - Install OVS dependencies
- `export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`
+    GNU make, GCC 4.x (or Clang 3.4) and libnuma (mandatory);
+    libssl, libcap-ng and Python 2.7 (optional).
+    More information can be found at [Build Requirements].
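+
+    For example, on a Debian/Ubuntu-based system something along these lines installs
+    the mandatory and optional dependencies (package names are distribution-specific
+    and shown here only as a sketch):
+
+    ```
+    # Debian/Ubuntu package names; adjust for your distribution
+    apt-get install -y build-essential autoconf automake libtool \
+                       libnuma-dev libssl-dev libcap-ng-dev python
+    ```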
- * IVSHMEM:
+ - Configure, Install OVS
- `export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`
+ ```
+ cd $OVS_DIR
+ ./boot.sh
+ ./configure --with-dpdk=$DPDK_BUILD
+ make install
+ ```
- ```
- cd $(OVS_DIR)/openvswitch
- ./boot.sh
- ./configure --with-dpdk=$DPDK_BUILD
- make
- ```
+    Note: Passing DPDK_BUILD can be skipped if the DPDK library is installed in a
+    standard location, i.e. `./configure --with-dpdk` should suffice.
-To have better performance one can enable aggressive compiler optimizations and
-use the special instructions(popcnt, crc32) that may not be available on all
-machines. Instead of typing `make`, type:
+ Additional information can be found in [INSTALL.md].
-`make CFLAGS='-O3 -march=native'`
+## <a name="ovssetup"></a> 3. Setup OVS with DPDK datapath
-Refer to [INSTALL.userspace.md] for general requirements of building userspace OVS.
+### 3.1 Setup Hugepages
-Using the DPDK with ovs-vswitchd:
----------------------------------
+ Allocate and mount 2M Huge pages:
-1. Setup system boot
- Add the following options to the kernel bootline:
-
- `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
+ - For persistent allocation of huge pages, write to hugepages.conf file
+ in /etc/sysctl.d
-2. Setup DPDK devices:
+ `echo 'vm.nr_hugepages=2048' > /etc/sysctl.d/hugepages.conf`
- DPDK devices can be setup using either the VFIO (for DPDK 1.7+) or UIO
- modules. UIO requires inserting an out of tree driver igb_uio.ko that is
- available in DPDK. Setup for both methods are described below.
+ - For run-time allocation of huge pages
- * UIO:
- 1. insert uio.ko: `modprobe uio`
- 2. insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
- 3. Bind network device to igb_uio:
- `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
+    `sysctl -w vm.nr_hugepages=N` where N is the number of 2M huge pages to allocate
- * VFIO:
+ - To verify hugepage configuration
- VFIO needs to be supported in the kernel and the BIOS. More information
- can be found in the [DPDK Linux GSG].
+ `grep HugePages_ /proc/meminfo`
- 1. Insert vfio-pci.ko: `modprobe vfio-pci`
- 2. Set correct permissions on vfio device: `sudo /usr/bin/chmod a+x /dev/vfio`
- and: `sudo /usr/bin/chmod 0666 /dev/vfio/*`
- 3. Bind network device to vfio-pci:
- `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=vfio-pci eth1`
+ - Mount hugepages
-3. Mount the hugetable filsystem
+ `mount -t hugetlbfs none /dev/hugepages`
- `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`
+ Note: Mount hugepages if not already mounted by default.
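+
+  For example, to make the hugepage mount persistent across reboots, an /etc/fstab
+  entry along the following lines could be added (a sketch; adjust to your system):
+
+  ```
+  # append a hugetlbfs entry to /etc/fstab (run as root)
+  echo 'none /dev/hugepages hugetlbfs defaults 0 0' >> /etc/fstab
+  ```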
- Ref to http://www.dpdk.org/doc/quick-start for verifying DPDK setup.
+### 3.2 Setup DPDK devices using VFIO
-4. Follow the instructions in [INSTALL.md] to install only the
- userspace daemons and utilities (via 'make install').
- 1. First time only db creation (or clearing):
+ - Supported with kernel version >= 3.6
+ - VFIO needs support from BIOS and kernel.
+ - BIOS changes:
- ```
- mkdir -p /usr/local/etc/openvswitch
- mkdir -p /usr/local/var/run/openvswitch
- rm /usr/local/etc/openvswitch/conf.db
- ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
- /usr/local/share/openvswitch/vswitch.ovsschema
- ```
+    Enable VT-d; this can be verified from the `dmesg | grep -e DMAR -e IOMMU` output
- 2. Start ovsdb-server
+ - GRUB bootline:
- ```
- ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
- --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
- --private-key=db:Open_vSwitch,SSL,private_key \
- --certificate=Open_vSwitch,SSL,certificate \
- --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
- ```
+    Add `iommu=pt intel_iommu=on`; this can be verified from the `cat /proc/cmdline`
+    output (a GRUB update sketch is shown at the end of this section).
- 3. First time after db creation, initialize:
+ - Load modules and bind the NIC to VFIO driver
- ```
- ovs-vsctl --no-wait init
- ```
+ ```
+ modprobe vfio-pci
+ sudo /usr/bin/chmod a+x /dev/vfio
+ sudo /usr/bin/chmod 0666 /dev/vfio/*
+ $DPDK_DIR/tools/dpdk_nic_bind.py --bind=vfio-pci eth1
+ $DPDK_DIR/tools/dpdk_nic_bind.py --status
+ ```
+
+  Note: If running a kernel < 3.6, UIO drivers must be used instead; please check
+  the "Setup Huge pages and DPDK devices using UIO" steps in [DPDK in the VM].
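+
+  For example, on distributions using GRUB2 the kernel command line could be updated
+  roughly as follows (file locations and the config-regeneration command vary by
+  distribution, e.g. `update-grub` on Debian/Ubuntu, so treat this as a sketch):
+
+  ```
+  # append to GRUB_CMDLINE_LINUX in /etc/default/grub:
+  #   GRUB_CMDLINE_LINUX="... iommu=pt intel_iommu=on"
+  grub2-mkconfig -o /boot/grub2/grub.cfg
+  reboot
+  ```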
+
+### 3.3 Setup OVS
+
+ 1. DB creation (One time step)
+
+ ```
+ mkdir -p /usr/local/etc/openvswitch
+ mkdir -p /usr/local/var/run/openvswitch
+ rm /usr/local/etc/openvswitch/conf.db
+ ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
+ /usr/local/share/openvswitch/vswitch.ovsschema
+ ```
+
+ 2. Start ovsdb-server
+
+ No SSL support
+
+ ```
+ ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
+ --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
+ --pidfile --detach
+ ```
+
+ SSL support
+
+ ```
+ ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
+ --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
+ --private-key=db:Open_vSwitch,SSL,private_key \
+ --certificate=Open_vSwitch,SSL,certificate \
+ --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
+ ```
+
+ 3. Initialize DB (One time step)
+
+ ```
+ ovs-vsctl --no-wait init
+ ```
+
+ 4. Start vswitchd
+
+     DPDK configuration arguments can be passed to vswitchd via the Open_vSwitch
+     'other_config' column. The important configuration options are listed below.
+     Defaults will be provided for all values not explicitly set. Refer to
+     ovs-vswitchd.conf.db(5) for additional information on configuration options.
+
+ * dpdk-init
+ Specifies whether OVS should initialize and support DPDK ports. This is
+ a boolean, and defaults to false.
+
+ * dpdk-lcore-mask
+     Specifies the CPU cores on which dpdk lcore threads should be spawned and
+     expects a hex string (e.g. '0x123').
+
+ * dpdk-socket-mem
+ Comma separated list of memory to pre-allocate from hugepages on specific
+ sockets.
+
+ * dpdk-hugepage-dir
+ Directory where hugetlbfs is mounted
+
+ * vhost-sock-dir
+ Option to set the path to the vhost_user unix socket files.
+
+ NOTE: Changing any of these options requires restarting the ovs-vswitchd
+ application.
+
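+     For example, these options are set through the same 'other_config' column before
+     starting the daemon; the values below are purely illustrative:
+
+     ```
+     ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
+     ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-hugepage-dir=/dev/hugepages
+     ```
+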
+ Open vSwitch can be started as normal. DPDK will be initialized as long
+ as the dpdk-init option has been set to 'true'.
+
+ ```
+ export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
+ ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
+ ovs-vswitchd unix:$DB_SOCK --pidfile --detach
+ ```
+
+     If more than one GB of hugepages has been allocated (as for IVSHMEM), set the
+     amount and use NUMA node 0 memory. For details on using IVSHMEM with DPDK,
+     refer to [OVS Testcases].
+
+ ```
+ ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
+ ovs-vswitchd unix:$DB_SOCK --pidfile --detach
+ ```
-5. Start vswitchd:
+     To better scale the workloads across cores, multiple pmd threads can be
+     created and pinned to CPU cores by explicitly specifying pmd-cpu-mask.
+     For example, to spawn 2 pmd threads and pin them to cores 1 and 2:
- DPDK configuration arguments can be passed to vswitchd via `--dpdk`
- argument. This needs to be first argument passed to vswitchd process.
- dpdk arg -c is ignored by ovs-dpdk, but it is a required parameter
- for dpdk initialization.
+ ```
+ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6
+ ```
- ```
- export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
- ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach
- ```
+ 5. Create bridge & add DPDK devices
- If allocated more than one GB hugepage (as for IVSHMEM), set amount and
- use NUMA node 0 memory:
+     Create a bridge with datapath_type "netdev" in the configuration database
- ```
- ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
- -- unix:$DB_SOCK --pidfile --detach
- ```
+ `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`
-6. Add bridge & ports
+ Now you can add DPDK devices. OVS expects DPDK device names to start with
+ "dpdk" and end with a portid. vswitchd should print (in the log file) the
+ number of dpdk devices found.
- To use ovs-vswitchd with DPDK, create a bridge with datapath_type
- "netdev" in the configuration database. For example:
+ ```
+ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
+ ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
+ ```
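+
+     To confirm that the devices were found, the vswitchd log can be checked; the
+     path below assumes a default from-source install prefix and may differ on your
+     system:
+
+     ```
+     grep -i dpdk /usr/local/var/log/openvswitch/ovs-vswitchd.log
+     ```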
- `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`
+     After the DPDK ports are added to the switch, a polling thread continuously polls
+     the DPDK devices and consumes 100% of its core, as can be seen from the 'top' and
+     'ps' commands.
- Now you can add dpdk devices. OVS expect DPDK device name start with dpdk
- and end with portid. vswitchd should print (in the log file) the number
- of dpdk devices found.
+ ```
+ top -H
+ ps -eLo pid,psr,comm | grep pmd
+ ```
- ```
- ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
- ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
- ```
+ Note: creating bonds of DPDK interfaces is slightly different to creating
+ bonds of system interfaces. For DPDK, the interface type must be explicitly
+ set, for example:
- Once first DPDK port is added to vswitchd, it creates a Polling thread and
- polls dpdk device in continuous loop. Therefore CPU utilization
- for that thread is always 100%.
+ ```
+ ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk
+ ```
-7. Add test flows
+ 6. PMD thread statistics
- Test flow script across NICs (assuming ovs in /usr/src/ovs):
- Execute script:
+ ```
+ # Check current stats
+ ovs-appctl dpif-netdev/pmd-stats-show
- ```
- #! /bin/sh
- # Move to command directory
- cd /usr/src/ovs/utilities/
+ # Show port/rxq assignment
+ ovs-appctl dpif-netdev/pmd-rxq-show
- # Clear current flows
- ./ovs-ofctl del-flows br0
+ # Clear previous stats
+ ovs-appctl dpif-netdev/pmd-stats-clear
+ ```
- # Add flows between port 1 (dpdk0) to port 2 (dpdk1)
- ./ovs-ofctl add-flow br0 in_port=1,action=output:2
- ./ovs-ofctl add-flow br0 in_port=2,action=output:1
- ```
+ 7. Stop vswitchd & Delete bridge
-8. Performance tuning
+ ```
+ ovs-appctl -t ovs-vswitchd exit
+ ovs-appctl -t ovsdb-server exit
+ ovs-vsctl del-br br0
+ ```
+
+## <a name="builddpdk"></a> 4. DPDK in the VM
- With pmd multi-threading support, OVS creates one pmd thread for each
- numa node as default. The pmd thread handles the I/O of all DPDK
- interfaces on the same numa node. The following two commands can be used
- to configure the multi-threading behavior.
+The DPDK 'testpmd' application can be run in the Guest VM for high speed
+packet forwarding between vhostuser ports. DPDK and the testpmd application
+have to be compiled on the guest VM. Below are the steps for setting up
+the testpmd application in the VM. More information on vhostuser ports
+can be found in [Vhost Walkthrough].
- `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`
+ * Instantiate the Guest
- The command above asks for a CPU mask for setting the affinity of pmd
- threads. A set bit in the mask means a pmd thread is created and pinned
- to the corresponding CPU core. For more information, please refer to
- `man ovs-vswitchd.conf.db`
+ ```
+ Qemu version >= 2.2.0
- `ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>`
+ export VM_NAME=Centos-vm
+ export GUEST_MEM=3072M
+ export QCOW2_IMAGE=/root/CentOS7_x86_64.qcow2
+ export VHOST_SOCK_DIR=/usr/local/var/run/openvswitch
- The command above sets the number of rx queues of each DPDK interface. The
- rx queues are assigned to pmd threads on the same numa node in round-robin
- fashion. For more information, please refer to `man ovs-vswitchd.conf.db`
+ qemu-system-x86_64 -name $VM_NAME -cpu host -enable-kvm -m $GUEST_MEM -object memory-backend-file,id=mem,size=$GUEST_MEM,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -smp sockets=1,cores=2 -drive file=$QCOW2_IMAGE -chardev socket,id=char0,path=$VHOST_SOCK_DIR/dpdkvhostuser0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off -chardev socket,id=char1,path=$VHOST_SOCK_DIR/dpdkvhostuser1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off --nographic -snapshot
+ ```
- Ideally for maximum throughput, the pmd thread should not be scheduled out
- which temporarily halts its execution. The following affinitization methods
- can help.
+  * Download the DPDK sources to the VM and build DPDK
- Lets pick core 4,6,8,10 for pmd threads to run on. Also assume a dual 8 core
- sandy bridge system with hyperthreading enabled where CPU1 has cores 0,...,7
- and 16,...,23 & CPU2 cores 8,...,15 & 24,...,31. (A different cpu
- configuration could have different core mask requirements).
+ ```
+ cd /root/dpdk/
+ wget http://dpdk.org/browse/dpdk/snapshot/dpdk-16.04.zip
+ unzip dpdk-16.04.zip
+ export DPDK_DIR=/root/dpdk/dpdk-16.04
+ export DPDK_TARGET=x86_64-native-linuxapp-gcc
+ export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
+ cd $DPDK_DIR
+ make install T=$DPDK_TARGET DESTDIR=install
+ ```
- To kernel bootline add core isolation list for cores and associated hype cores
- (e.g. isolcpus=4,20,6,22,8,24,10,26,). Reboot system for isolation to take
- effect, restart everything.
+ * Build the test-pmd application
- Configure pmd threads on core 4,6,8,10 using 'pmd-cpu-mask':
+ ```
+ cd app/test-pmd
+ export RTE_SDK=$DPDK_DIR
+ export RTE_TARGET=$DPDK_TARGET
+ make
+ ```
- `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550`
+ * Setup Huge pages and DPDK devices using UIO
- You should be able to check that pmd threads are pinned to the correct cores
- via:
+ ```
+ sysctl vm.nr_hugepages=1024
+ mkdir -p /dev/hugepages
+ mount -t hugetlbfs hugetlbfs /dev/hugepages (only if not already mounted)
+ modprobe uio
+ insmod $DPDK_BUILD/kmod/igb_uio.ko
+ $DPDK_DIR/tools/dpdk_nic_bind.py --status
+ $DPDK_DIR/tools/dpdk_nic_bind.py -b igb_uio 00:03.0 00:04.0
+ ```
- ```
- top -p `pidof ovs-vswitchd` -H -d1
- ```
+  The PCI IDs of the vhost ports can be retrieved using the `lspci | grep Ethernet` command.
- Note, the pmd threads on a numa node are only created if there is at least
- one DPDK interface from the numa node that has been added to OVS.
+## <a name="ovstc"></a> 5. OVS Testcases
- Note, core 0 is always reserved from non-pmd threads and should never be set
- in the cpu mask.
+  Below are a few testcases and the steps to be followed for each.
-DPDK Rings :
-------------
+### 5.1 PHY-PHY
-Following the steps above to create a bridge, you can now add dpdk rings
-as a port to the vswitch. OVS will expect the DPDK ring device name to
-start with dpdkr and end with a portid.
+  Steps 1-5 of section 3.3 create and initialize the DB, start vswitchd and
+  add DPDK devices to bridge 'br0'.
-`ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`
+  1. Add test flows to forward packets between DPDK port 0 and port 1
-DPDK rings client test application
+ ```
+ # Clear current flows
+ ovs-ofctl del-flows br0
-Included in the test directory is a sample DPDK application for testing
-the rings. This is from the base dpdk directory and modified to work
-with the ring naming used within ovs.
+ # Add flows between port 1 (dpdk0) to port 2 (dpdk1)
+ ovs-ofctl add-flow br0 in_port=1,action=output:2
+ ovs-ofctl add-flow br0 in_port=2,action=output:1
+ ```
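+
+     The installed flows can be verified with:
+
+     ```
+     ovs-ofctl dump-flows br0
+     ```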
-location tests/ovs_client
+### 5.2 PHY-VM-PHY [VHOST LOOPBACK]
-To run the client :
+  Steps 1-5 of section 3.3 create and initialize the DB, start vswitchd and
+  add DPDK devices to bridge 'br0'.
-```
-cd /usr/src/ovs/tests/
-ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
-```
+ 1. Add dpdkvhostuser ports to bridge 'br0'. More information on the dpdkvhostuser ports
+ can be found in [Vhost Walkthrough].
-In the case of the dpdkr example above the "port id you gave dpdkr" is 0.
+ ```
+ ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser
+ ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser
+ ```
-It is essential to have --proc-type=secondary
+  2. Add test flows to forward packets between the DPDK devices and the VM ports
-The application simply receives an mbuf on the receive queue of the
-ethernet ring and then places that same mbuf on the transmit ring of
-the ethernet ring. It is a trivial loopback application.
+ ```
+ # Clear current flows
+ ovs-ofctl del-flows br0
-DPDK rings in VM (IVSHMEM shared memory communications)
--------------------------------------------------------
+ # Add flows
+ ovs-ofctl add-flow br0 in_port=1,action=output:3
+ ovs-ofctl add-flow br0 in_port=3,action=output:1
+ ovs-ofctl add-flow br0 in_port=4,action=output:2
+ ovs-ofctl add-flow br0 in_port=2,action=output:4
-In addition to executing the client in the host, you can execute it within
-a guest VM. To do so you will need a patched qemu. You can download the
-patch and getting started guide at :
+ # Dump flows
+ ovs-ofctl dump-flows br0
+ ```
-https://01.org/packet-processing/downloads
+ 3. Instantiate Guest VM using Qemu cmdline
-A general rule of thumb for better performance is that the client
-application should not be assigned the same dpdk core mask "-c" as
-the vswitchd.
+ Guest Configuration
-Restrictions:
--------------
+ ```
+ | configuration | values | comments
+ |----------------------|--------|-----------------
+ | qemu version | 2.2.0 |
+ | qemu thread affinity | core 5 | taskset 0x20
+ | memory | 4GB | -
+ | cores | 2 | -
+ | Qcow2 image | CentOS7| -
+ | mrg_rxbuf | off | -
+ ```
- - This Support is for Physical NIC. I have tested with Intel NIC only.
- - Work with 1500 MTU, needs few changes in DPDK lib to fix this issue.
- - Currently DPDK port does not make use any offload functionality.
+ Instantiate Guest
+
+ ```
+ export VM_NAME=vhost-vm
+ export GUEST_MEM=3072M
+ export QCOW2_IMAGE=/root/CentOS7_x86_64.qcow2
+ export VHOST_SOCK_DIR=/usr/local/var/run/openvswitch
+
+ taskset 0x20 qemu-system-x86_64 -name $VM_NAME -cpu host -enable-kvm -m $GUEST_MEM -object memory-backend-file,id=mem,size=$GUEST_MEM,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -smp sockets=1,cores=2 -drive file=$QCOW2_IMAGE -chardev socket,id=char0,path=$VHOST_SOCK_DIR/dpdkvhostuser0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off -chardev socket,id=char1,path=$VHOST_SOCK_DIR/dpdkvhostuser1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off --nographic -snapshot
+ ```
+
+ 4. Guest VM using libvirt
+
+     The below is a simple xml configuration of a 'demovm' guest that can be
+     instantiated using 'virsh'. The guest uses a pair of vhostuser ports and boots
+     with 4GB RAM and 2 cores. More information can be found in [Vhost Walkthrough].
+
+ ```
+ <domain type='kvm'>
+ <name>demovm</name>
+ <uuid>4a9b3f53-fa2a-47f3-a757-dd87720d9d1d</uuid>
+ <memory unit='KiB'>4194304</memory>
+ <currentMemory unit='KiB'>4194304</currentMemory>
+ <memoryBacking>
+ <hugepages>
+ <page size='2' unit='M' nodeset='0'/>
+ </hugepages>
+ </memoryBacking>
+ <vcpu placement='static'>2</vcpu>
+ <cputune>
+ <shares>4096</shares>
+ <vcpupin vcpu='0' cpuset='4'/>
+ <vcpupin vcpu='1' cpuset='5'/>
+ <emulatorpin cpuset='4,5'/>
+ </cputune>
+ <os>
+ <type arch='x86_64' machine='pc'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <features>
+ <acpi/>
+ <apic/>
+ </features>
+ <cpu mode='host-model'>
+ <model fallback='allow'/>
+ <topology sockets='2' cores='1' threads='1'/>
+ <numa>
+ <cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/>
+ </numa>
+ </cpu>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu-kvm</emulator>
+ <disk type='file' device='disk'>
+ <driver name='qemu' type='qcow2' cache='none'/>
+ <source file='/root/CentOS7_x86_64.qcow2'/>
+ <target dev='vda' bus='virtio'/>
+ </disk>
+ <disk type='dir' device='disk'>
+ <driver name='qemu' type='fat'/>
+ <source dir='/usr/src/dpdk-16.04'/>
+ <target dev='vdb' bus='virtio'/>
+ <readonly/>
+ </disk>
+ <interface type='vhostuser'>
+ <mac address='00:00:00:00:00:01'/>
+ <source type='unix' path='/usr/local/var/run/openvswitch/dpdkvhostuser0' mode='client'/>
+ <model type='virtio'/>
+ <driver queues='2'>
+ <host mrg_rxbuf='off'/>
+ </driver>
+ </interface>
+ <interface type='vhostuser'>
+ <mac address='00:00:00:00:00:02'/>
+ <source type='unix' path='/usr/local/var/run/openvswitch/dpdkvhostuser1' mode='client'/>
+ <model type='virtio'/>
+ <driver queues='2'>
+ <host mrg_rxbuf='off'/>
+ </driver>
+ </interface>
+ <serial type='pty'>
+ <target port='0'/>
+ </serial>
+ <console type='pty'>
+ <target type='serial' port='0'/>
+ </console>
+ </devices>
+ </domain>
+ ```
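+
+     Assuming the above XML is saved as 'demovm.xml' (an illustrative filename), the
+     guest can be defined and started with virsh:
+
+     ```
+     virsh define demovm.xml
+     virsh start demovm
+     ```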
+
+ 5. DPDK Packet forwarding in Guest VM
+
+     To accomplish this, DPDK and the testpmd application have to be compiled on
+     the VM first; the steps are listed in [DPDK in the VM].
+
+ * Run test-pmd application
+
+ ```
+ cd $DPDK_DIR/app/test-pmd;
+ ./testpmd -c 0x3 -n 4 --socket-mem 1024 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan
+ set fwd mac_retry
+ start
+ ```
+
+ * Bind vNIC back to kernel once the test is completed.
+
+ ```
+ $DPDK_DIR/tools/dpdk_nic_bind.py --bind=virtio-pci 0000:00:03.0
+ $DPDK_DIR/tools/dpdk_nic_bind.py --bind=virtio-pci 0000:00:04.0
+ ```
+     Note: The appropriate PCI IDs must be passed in the above example. The PCI IDs
+     can be retrieved using the '$DPDK_DIR/tools/dpdk_nic_bind.py --status' command.
+
+### 5.3 PHY-VM-PHY [IVSHMEM]
+
+  The steps for setting up IVSHMEM are covered in section 5.2 (PVP - IVSHMEM)
+  of [OVS Testcases] in the ADVANCED install guide.
+
+## <a name="ovslimits"></a> 6. Limitations
+
+  - Only an MTU of 1500 is supported; MTU configuration for DPDK netdevs will be
+    added in a future OVS release.
+  - Currently, DPDK ports do not use HW offload functionality.
+ - Network Interface Firmware requirements:
+ Each release of DPDK is validated against a specific firmware version for
+ a supported Network Interface. New firmware versions introduce bug fixes,
+ performance improvements and new functionality that DPDK leverages. The
+ validated firmware versions are available as part of the release notes for
+ DPDK. It is recommended that users update Network Interface firmware to
+ match what has been validated for the DPDK release.
+
+ For DPDK 16.04, the list of validated firmware versions can be found at:
+
+ http://dpdk.org/doc/guides/rel_notes/release_16_04.html
- ivshmem:
- - The shared memory is currently restricted to the use of a 1GB
- huge pages.
- - All huge pages are shared amongst the host, clients, virtual
- machines etc.
Bug Reporting:
--------------
Please report problems to bugs@openvswitch.org.
-[INSTALL.userspace.md]:INSTALL.userspace.md
+
+[DPDK requirements]: http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html
+[Download DPDK]: http://dpdk.org/browse/dpdk/refs/
+[Download OVS]: http://openvswitch.org/releases/
+[DPDK Supported NICs]: http://dpdk.org/doc/nics
+[Build Requirements]: https://github.com/openvswitch/ovs/blob/master/INSTALL.md#build-requirements
+[INSTALL.DPDK-ADVANCED.md]: INSTALL.DPDK-ADVANCED.md
+[OVS Testcases]: INSTALL.DPDK-ADVANCED.md#ovstc
+[Vhost Walkthrough]: INSTALL.DPDK-ADVANCED.md#vhost
+[DPDK in the VM]: INSTALL.DPDK.md#builddpdk
[INSTALL.md]:INSTALL.md
-[DPDK Linux GSG]: http://www.dpdk.org/doc/guides/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-igb-uioor-vfio-modules
+[INSTALL.Fedora.md]:INSTALL.Fedora.md
+[INSTALL.RHEL.md]:INSTALL.RHEL.md
+[INSTALL.Debian.md]:INSTALL.Debian.md