dpif-netdev: Unique and sequential tx_qids.
author    Ilya Maximets <i.maximets@samsung.com>
          Tue, 26 Jan 2016 06:12:34 +0000 (09:12 +0300)
committer Daniele Di Proietto <diproiettod@vmware.com>
          Tue, 26 Jan 2016 19:40:45 +0000 (11:40 -0800)
commit    347ba9bb9b7a8e053ced54016f903e749ebecb7f
tree      36410b1a78ee211d3509a7edb3564a0c3d92a7c9
parent    ae7ad0a15ef5c9c045419e1dbcfb7b98b05c7b5a
dpif-netdev: Unique and sequential tx_qids.

Currently tx_qid is equal to pmd->core_id. This leads to unexpected
behavior if pmd-cpu-mask is different from '/(0*)(1|3|7)?(f*)/',
e.g. if core_ids are not sequential, or don't start from 0, or both.

Example:
starting 2 pmd threads with 1 port, 2 rxqs per port,
pmd-cpu-mask = 00000014 and let dev->real_n_txq = 2

In that case pmd_1->tx_qid = 2, pmd_2->tx_qid = 4 and
txq_needs_locking = true (if the device doesn't have
ovs_numa_get_n_cores()+1 queues).

In that case, after truncating in netdev_dpdk_send__():
'qid = qid % dev->real_n_txq;'
pmd_1: qid = 2 % 2 = 0
pmd_2: qid = 4 % 2 = 0

So, both threads will call dpdk_queue_pkts() with the same qid = 0.
This is unexpected behavior if there are 2 tx queues in the device.
Queue #1 will not be used and both threads will lock queue #0
on each send.

Fix that by using sequential tx_qids.

Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Signed-off-by: Daniele Di Proietto <diproiettod@vmware.com>
lib/dpif-netdev.c