Revert "dm: only run the queue on completion if congested or no requests pending"
author Mike Snitzer <snitzer@redhat.com>
Wed, 8 Jul 2015 20:08:24 +0000 (16:08 -0400)
committer Mike Snitzer <snitzer@redhat.com>
Wed, 8 Jul 2015 20:16:07 +0000 (16:16 -0400)
This reverts commit 9a0e609e3fd8a95c96629b9fbde6b8c5b9a1456a.
(Resolved a conflict during the revert, due to commit bfebd1cdb4, which
came after it.)

This revert is motivated by a couple of failure reports on request-based
DM multipath testbeds:
1) Netapp reported that their multipath fault injection test under heavy
   IO load can stall for longer than 300 seconds.
2) IBM reported elevated lock contention in their testbed (likely caused
   by increased back pressure because IO was not being dispatched as
   quickly):
   https://www.redhat.com/archives/dm-devel/2015-July/msg00057.html

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 4.1+
drivers/md/dm.c

index f331d88..de70377 100644
@@ -1067,13 +1067,10 @@ static void rq_end_stats(struct mapped_device *md, struct request *orig)
  */
 static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
 {
-       int nr_requests_pending;
-
        atomic_dec(&md->pending[rw]);
 
        /* nudge anyone waiting on suspend queue */
-       nr_requests_pending = md_in_flight(md);
-       if (!nr_requests_pending)
+       if (!md_in_flight(md))
                wake_up(&md->wait);
 
        /*
@@ -1085,8 +1082,7 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
        if (run_queue) {
                if (md->queue->mq_ops)
                        blk_mq_run_hw_queues(md->queue, true);
-               else if (!nr_requests_pending ||
-                        (nr_requests_pending >= md->queue->nr_congestion_on))
+               else
                        blk_run_queue_async(md->queue);
        }
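
For readability, here is a minimal user-space sketch of the dispatch decision
being reverted; the function names and the sample threshold below are
illustrative only and are not part of the kernel API. Before this revert the
completion path only re-ran a non-blk-mq queue when nothing was in flight or
the in-flight count had reached the congestion threshold; after the revert the
queue is re-run unconditionally whenever run_queue is set.

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Illustrative model only: "pending" stands in for md_in_flight(md)
     * and "congestion_on" for md->queue->nr_congestion_on.
     */
    static bool run_queue_before_revert(int pending, int congestion_on)
    {
            return pending == 0 || pending >= congestion_on;
    }

    static bool run_queue_after_revert(int pending, int congestion_on)
    {
            (void)pending;
            (void)congestion_on;
            return true; /* blk_run_queue_async() is always called */
    }

    int main(void)
    {
            /* Moderate load: IO in flight, but below the congestion threshold. */
            int pending = 16, congestion_on = 113;

            printf("before revert: run queue? %d\n",
                   run_queue_before_revert(pending, congestion_on));
            printf("after revert:  run queue? %d\n",
                   run_queue_after_revert(pending, congestion_on));
            return 0;
    }

Under the pre-revert logic the moderately loaded case above would not re-run
the queue at all, leaving remaining requests to be dispatched only on later
completions, which is consistent with the stalls and back pressure reported
above.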