Using Open vSwitch with DPDK
============================

Open vSwitch can use the Intel(R) DPDK library to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a mode.

The DPDK support of Open vSwitch is considered experimental.
It has not been thoroughly tested.

This version of Open vSwitch should be built manually with "configure"
and "make".

Building and Installing:
------------------------

DPDK 1.7 is required.

DPDK:
Set the DPDK directory, e.g.:

    export DPDK_DIR=/usr/src/dpdk-1.7.0
    cd $DPDK_DIR

Update config/common_linuxapp so that DPDK generates a single library
file. (This modification is also required for the IVSHMEM build.)

    CONFIG_RTE_BUILD_COMBINE_LIBS=y

For the default install, without IVSHMEM:

    make install T=x86_64-native-linuxapp-gcc

To include IVSHMEM (shared memory):

    make install T=x86_64-ivshmem-linuxapp-gcc

For details refer to http://dpdk.org/

Linux kernel:
Refer to intel-dpdk-getting-started-guide.pdf for the DPDK kernel
requirements.

OVS:
Non-IVSHMEM:

    export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/

IVSHMEM:

    export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/

    cd $OVS_DIR
    ./boot.sh
    ./configure --with-dpdk=$DPDK_BUILD
    make

Refer to INSTALL.userspace for general requirements of building
userspace OVS.

Using the DPDK with ovs-vswitchd:
---------------------------------

Setup system boot:
Add to the kernel bootline:

    default_hugepagesz=1GB hugepagesz=1G hugepages=1

First setup DPDK devices:
  - insert uio.ko: e.g. modprobe uio
  - insert igb_uio.ko: e.g. insmod $DPDK_BUILD/kmod/igb_uio.ko
  - Bind network device to igb_uio:
        e.g. $DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1

Alternate binding method:
  Find target Ethernet devices:
      lspci -nn|grep Ethernet
  Bring them down (e.g. eth2, eth3):
      ifconfig eth2 down
      ifconfig eth3 down
  Look at current devices (e.g. ixgbe devices):
      ls /sys/bus/pci/drivers/ixgbe/
      0000:02:00.0  0000:02:00.1  bind  module  new_id  remove_id  uevent  unbind
  Unbind target PCI devices from their current driver (e.g. 02:00.0 ...):
      echo 0000:02:00.0 > /sys/bus/pci/drivers/ixgbe/unbind
      echo 0000:02:00.1 > /sys/bus/pci/drivers/ixgbe/unbind
  Bind to the target driver (e.g. igb_uio):
      echo 0000:02:00.0 > /sys/bus/pci/drivers/igb_uio/bind
      echo 0000:02:00.1 > /sys/bus/pci/drivers/igb_uio/bind
  Check binding for the listed devices:
      ls /sys/bus/pci/drivers/igb_uio
      0000:02:00.0  0000:02:00.1  bind  module  new_id  remove_id  uevent  unbind

Prepare system:
  - mount hugetlbfs
        e.g. mount -t hugetlbfs -o pagesize=1G none /dev/hugepages

Refer to http://www.dpdk.org/doc/quick-start for verifying the DPDK setup.
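Before moving on to ovsdb-server and vswitchd, it may be worth a quick
sanity check that the hugepages and driver binding from the steps above
actually took effect. The commands below are only an optional check,
assuming the /dev/hugepages mount point and the igb_uio driver used in
the examples above:

    # Hugepages reserved via the kernel bootline should be visible here:
    grep -i huge /proc/meminfo

    # The hugetlbfs mount point created above should be listed:
    mount | grep hugetlbfs

    # PCI devices bound to igb_uio appear in the driver's sysfs directory:
    ls /sys/bus/pci/drivers/igb_uio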
Start ovsdb-server as discussed in the INSTALL doc.
Summary e.g.:

First time only, create (or clear) the database:

    mkdir -p /usr/local/etc/openvswitch
    mkdir -p /usr/local/var/run/openvswitch
    rm /usr/local/etc/openvswitch/conf.db
    cd $OVS_DIR
    ./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db  \
        ./vswitchd/vswitch.ovsschema

Start ovsdb-server:

    cd $OVS_DIR
    ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
        --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
        --private-key=db:Open_vSwitch,SSL,private_key \
        --certificate=db:Open_vSwitch,SSL,certificate \
        --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach

First time after db creation, initialize:

    cd $OVS_DIR
    ./utilities/ovs-vsctl --no-wait init

Start vswitchd:

DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
argument. This needs to be the first argument passed to the vswitchd
process. The dpdk argument -c is ignored by ovs-dpdk, but it is a
required parameter for DPDK initialization.

e.g.

    export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
    ./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach

If more than one GB of hugepages has been allocated (as for IVSHMEM),
set the amount and use NUMA node 0 memory:

    ./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
        -- unix:$DB_SOCK --pidfile --detach

To use ovs-vswitchd with DPDK, create a bridge with datapath_type
"netdev" in the configuration database. For example:

    ovs-vsctl add-br br0
    ovs-vsctl set bridge br0 datapath_type=netdev

Now you can add DPDK devices. OVS expects DPDK device names to start
with "dpdk" and end with a portid. vswitchd should print the number of
DPDK devices found.

    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
    ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk

Once the first DPDK port is added to vswitchd, it creates a polling
thread and polls the DPDK device in a continuous loop. Therefore CPU
utilization for that thread is always 100%.

Test flow script across NICs (assuming ovs in /usr/src/ovs):
  Assume 1.1.1.1 on NIC port 1 (dpdk0)
  Assume 1.1.1.2 on NIC port 2 (dpdk1)
  Execute script:

############################# Script:

#! /bin/sh
# Move to command directory
cd /usr/src/ovs/utilities/

# Clear current flows
./ovs-ofctl del-flows br0

# Add flows between port 1 (dpdk0) and port 2 (dpdk1)
./ovs-ofctl add-flow br0 in_port=1,dl_type=0x800,nw_src=1.1.1.1,\
nw_dst=1.1.1.2,idle_timeout=0,action=output:2
./ovs-ofctl add-flow br0 in_port=2,dl_type=0x800,nw_src=1.1.1.2,\
nw_dst=1.1.1.1,idle_timeout=0,action=output:1

######################################

With pmd multi-threading support, OVS creates one pmd thread for each
NUMA node by default. The pmd thread handles the I/O of all DPDK
interfaces on the same NUMA node. The following two commands can be
used to configure the multi-threading behavior.

    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>

The command above sets a CPU mask for the affinity of pmd threads. A
set bit in the mask means a pmd thread is created and pinned to the
corresponding CPU core. For more information, please refer to
`man ovs-vswitchd.conf.db`.

    ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>

The command above sets the number of rx queues for each DPDK interface.
The rx queues are assigned to pmd threads on the same NUMA node in a
round-robin fashion. For more information, please refer to
`man ovs-vswitchd.conf.db`.

Ideally, for maximum throughput, the pmd thread should not be scheduled
out, since that temporarily halts its execution. The following
affinitization method can help.

Let's pick cores 4,6,8,10 for the pmd threads to run on. Also assume a
dual 8-core Sandy Bridge system with hyperthreading enabled, where CPU1
has cores 0,...,7 and 16,...,23 and CPU2 has cores 8,...,15 and
24,...,31. (A different CPU configuration could have different core
mask requirements.)

Add a core isolation list for these cores and their associated
hyperthread siblings to the kernel bootline
(e.g. isolcpus=4,20,6,22,8,24,10,26). Reboot the system for the
isolation to take effect, then restart everything.

Configure pmd threads on cores 4,6,8,10 using 'pmd-cpu-mask':

    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550

You should be able to check that the pmd threads are pinned to the
correct cores via:

    top -p `pidof ovs-vswitchd` -H -d1

Note, the pmd threads on a NUMA node are only created if there is at
least one DPDK interface from that NUMA node added to OVS.

Note, core 0 is always reserved for non-pmd threads and should never be
set in the CPU mask.
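The pmd-cpu-mask used above is just a hexadecimal bitmap with one bit
set per core. As an illustration only (the `cores` list below is a local
variable for the example, not an OVS setting), a mask for an arbitrary
set of cores could be computed with a small shell snippet:

    #! /bin/sh
    # Illustrative sketch: build a pmd-cpu-mask from a list of core IDs.
    cores="4 6 8 10"
    mask=0
    for c in $cores; do
        mask=$((mask | (1 << c)))
    done
    # For cores 4,6,8,10 this prints 550, matching the 00000550 mask above.
    printf 'pmd-cpu-mask=%x\n' "$mask"

The resulting hex string is what gets passed to the
'other_config:pmd-cpu-mask' command shown earlier.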
DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add DPDK
rings as a port to the vswitch. OVS will expect the DPDK ring device
name to start with "dpdkr" and end with a portid.

    ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr

DPDK rings client test application

Included in the test directory is a sample DPDK application for testing
the rings. It is taken from the base DPDK directory and modified to
work with the ring naming used within OVS.

Location: tests/ovs_client

To run the client:

    cd /usr/src/ovs/tests/
    ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"

In the case of the dpdkr example above, the "port id you gave dpdkr"
is 0.

It is essential to have --proc-type=secondary.

The application simply receives an mbuf on the receive queue of the
ethernet ring and then places that same mbuf on the transmit ring of
the ethernet ring. It is a trivial loopback application.

DPDK rings in VM (IVSHMEM shared memory communications)
--------------------------------------------------------

In addition to executing the client in the host, you can execute it
within a guest VM. To do so you will need a patched qemu. You can
download the patch and getting started guide at:

    https://01.org/packet-processing/downloads

A general rule of thumb for better performance is that the client
application should not be assigned the same DPDK core mask "-c" as the
vswitchd.

Restrictions:
-------------

  - This support is for physical NICs. It has been tested with Intel
    NICs only.
  - It works with 1500 MTU; a few changes are needed in the DPDK
    library to fix this issue.
  - Currently the DPDK port does not make use of any offload
    functionality.

  ivshmem:
  - The shared memory is currently restricted to the use of 1GB huge
    pages.
  - All huge pages are shared amongst the host, clients, virtual
    machines etc.

Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.