Building and Installing:
------------------------
-Required DPDK 1.8.0, `fuse`, `fuse-devel` (`libfuse-dev` on Debian/Ubuntu)
+Required: DPDK 2.0, `fuse`, `fuse-devel` (`libfuse-dev` on Debian/Ubuntu)
1. Configure, build & install DPDK:
1. Set `$DPDK_DIR`
```
- export DPDK_DIR=/usr/src/dpdk-1.8.0
+ export DPDK_DIR=/usr/src/dpdk-2.0
cd $DPDK_DIR
```
`CONFIG_RTE_BUILD_COMBINE_LIBS=y`
Update `config/common_linuxapp` so that DPDK is built with vhost
- libraries:
+ libraries; currently, OVS only supports vhost-cuse, so DPDK vhost-user
+ libraries should be explicitly turned off (they are enabled by default
+ in DPDK 2.0).
`CONFIG_RTE_LIBRTE_VHOST=y`
+ `CONFIG_RTE_LIBRTE_VHOST_USER=n`
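+
+ One way to apply these settings non-interactively (a sketch; it assumes
+ the stock DPDK 2.0 defaults of `CONFIG_RTE_LIBRTE_VHOST=n` and
+ `CONFIG_RTE_LIBRTE_VHOST_USER=y` in `config/common_linuxapp`) is:
+
+ ```
+ sed -i 's/CONFIG_RTE_LIBRTE_VHOST=n/CONFIG_RTE_LIBRTE_VHOST=y/' config/common_linuxapp
+ sed -i 's/CONFIG_RTE_LIBRTE_VHOST_USER=y/CONFIG_RTE_LIBRTE_VHOST_USER=n/' config/common_linuxapp
+ ```
+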
Then run `make install` to build and install the library.
For default install without IVSHMEM:
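`make install T=x86_64-native-linuxapp-gcc`
1. Configure & build OVS: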
```
cd $OVS_DIR/openvswitch
./boot.sh
- ./configure --with-dpdk=$DPDK_BUILD
+ ./configure --with-dpdk=$DPDK_BUILD [CFLAGS="-g -O2 -Wno-cast-align"]
make
```
+ Note: 'clang' users may specify the '-Wno-cast-align' flag to suppress DPDK cast-align warnings.
+
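+ Note: `$DPDK_BUILD` above is assumed to refer to the DPDK target build
+ directory created by `make install` in the earlier step, for example:
+
+ ```
+ export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc
+ ```
+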
To have better performance one can enable aggressive compiler optimizations and
use special instructions (popcnt, crc32) that may not be available on all
machines. Instead of typing `make`, type:
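`make CFLAGS='-O3 -march=native'`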
polls the DPDK device in a continuous loop. Therefore CPU utilization
for that thread is always 100%.
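+
+ Note: a sketch of how the polling cores can be restricted, assuming an
+ OVS build that already supports the `other_config:pmd-cpu-mask` knob:
+
+ ```
+ # Run poll-mode driver threads only on cores 1 and 2 (bitmask 0x6).
+ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6
+ ```
+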
+ Note: creating bonds of DPDK interfaces is slightly different to creating
+ bonds of system interfaces. For DPDK, the interface type must be explicitly
+ set, for example:
+
+ ```
+ ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk
+ ```
+
7. Add test flows
Test flow script across NICs (assuming ovs in /usr/src/ovs):
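For example, assuming `dpdk0` and `dpdk1` are OpenFlow ports 1 and 2 on
bridge `br0`:

```
cd /usr/src/ovs

# Clear current flows
./utilities/ovs-ofctl del-flows br0

# Add bidirectional flows between port 1 (dpdk0) and port 2 (dpdk1)
./utilities/ovs-ofctl add-flow br0 in_port=1,action=output:2
./utilities/ovs-ofctl add-flow br0 in_port=2,action=output:1
```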
- DPDK-vHost support works with 1G huge pages.
ivshmem:
- - The shared memory is currently restricted to the use of a 1GB
- huge pages.
- - All huge pages are shared amongst the host, clients, virtual
- machines etc.
+ - If you run Open vSwitch with smaller page sizes (e.g. 2MB), you may be
+ unable to share any rings or mempools with a virtual machine.
+ This is because the current implementation of ivshmem works by sharing
+ a single 1GB huge page from the host operating system to any guest
+ operating system through the Qemu ivshmem device. When using smaller
+ page sizes, multiple pages may be required to hold the ring descriptors
+ and buffer pools. The Qemu ivshmem device does not allow you to share
+ multiple file descriptors with the guest operating system. However, if you
+ want to share dpdkr rings with other processes on the host, you can do
+ this with smaller page sizes.
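+
+ For example, a `dpdkr` ring port that a host DPDK process can then attach
+ to may be created as follows (`dpdkr0` is just an illustrative name; the
+ `type=dpdkr` interface type is the one referred to above):
+
+ ```
+ ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr
+ ```
+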
Bug Reporting:
--------------