dpif-netdev: Unique and sequential tx_qids.
author    Ilya Maximets <i.maximets@samsung.com>
          Tue, 26 Jan 2016 06:12:34 +0000 (09:12 +0300)
committer Daniele Di Proietto <diproiettod@vmware.com>
          Wed, 27 Jan 2016 04:58:27 +0000 (20:58 -0800)
commit    c293b7c7f43aa6ed5ccf566b712543bd7aed4071
tree      6d958c81e9be4dc2ba746aa8173b20f3a0e0f9de
parent    8381ecaa9c8ff4185e89b90abf95b80e2eb52dab
dpif-netdev: Unique and sequential tx_qids.

Currently, tx_qid is equal to pmd->core_id. This leads to unexpected
behavior if pmd-cpu-mask does not match '/(0*)(1|3|7)?(f*)/',
e.g. if the core_ids are not sequential, do not start from 0, or both.

Example:
starting 2 pmd threads with 1 port, 2 rxqs per port,
pmd-cpu-mask = 00000014 and let dev->real_n_txq = 2

In that case, pmd_1->tx_qid = 2, pmd_2->tx_qid = 4 and
txq_needs_locking = true (if the device doesn't have
ovs_numa_get_n_cores() + 1 queues).

In that case, after truncating in netdev_dpdk_send__():
'qid = qid % dev->real_n_txq;'
pmd_1: qid = 2 % 2 = 0
pmd_2: qid = 4 % 2 = 0

So, both threads will call dpdk_queue_pkts() with the same qid = 0.
This is unexpected behavior when the device has 2 tx queues:
queue #1 will never be used, and both threads will lock queue #0
on each send.

Fix that by using sequential tx_qids.

Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Signed-off-by: Daniele Di Proietto <diproiettod@vmware.com>
lib/dpif-netdev.c