stop_machine: Remove cpu_stop_work entries from the list in cpu_stop_park()
author     Oleg Nesterov <oleg@redhat.com>    Tue, 30 Jun 2015 01:29:58 +0000 (03:29 +0200)
committer  Ingo Molnar <mingo@kernel.org>     Mon, 3 Aug 2015 10:21:28 +0000 (12:21 +0200)
cpu_stop_park() calls cpu_stop_signal_done() but leaves the work on
stopper->works. The owner of that work can free or reuse the memory
as soon as it is signalled and thereby corrupt the list, so if this
CPU comes back online cpu_stopper_thread() will crash.
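
The caller-side pattern behind this can be seen in stop_one_cpu():
both the cpu_stop_work and the cpu_stop_done it points to live on the
caller's stack, so the moment the done is signalled the caller may
return and that stack memory is reused while work->list is still
linked into stopper->works. Roughly (a simplified sketch of
stop_one_cpu() as it looks in this era, for illustration only; exact
details may differ):

  int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
  {
  	struct cpu_stop_done done;
  	/* 'work', including its ->list node, lives on this stack frame */
  	struct cpu_stop_work work = { .fn = fn, .arg = arg, .done = &done };

  	cpu_stop_init_done(&done, 1);
  	cpu_stop_queue_work(cpu, &work);	/* links work.list into stopper->works */
  	wait_for_completion(&done.completion);	/* returns once the done is signalled */
  	return done.executed ? done.ret : -ENOENT;
  }	/* the stack memory backing 'work' may be reused right after return */

Unlinking the work before signalling its done, as this patch does,
ensures nothing dangling is left on stopper->works for a later
cpu_stopper_thread() walk to trip over.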

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dave@stgolabs.net
Cc: der.herr@hofr.at
Cc: paulmck@linux.vnet.ibm.com
Cc: riel@redhat.com
Cc: viro@ZenIV.linux.org.uk
Link: http://lkml.kernel.org/r/20150630012958.GA23944@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/stop_machine.c

index 9a70def..12484e5 100644
@@ -462,13 +462,15 @@ static void cpu_stop_create(unsigned int cpu)
 static void cpu_stop_park(unsigned int cpu)
 {
        struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
-       struct cpu_stop_work *work;
+       struct cpu_stop_work *work, *tmp;
        unsigned long flags;
 
        /* drain remaining works */
        spin_lock_irqsave(&stopper->lock, flags);
-       list_for_each_entry(work, &stopper->works, list)
+       list_for_each_entry_safe(work, tmp, &stopper->works, list) {
+               list_del_init(&work->list);
                cpu_stop_signal_done(work->done, false);
+       }
        stopper->enabled = false;
        spin_unlock_irqrestore(&stopper->lock, flags);
 }
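
For illustration only, a rough open-coded equivalent of the fixed loop
(a sketch, not part of the patch), showing why the _safe iterator is
needed: the next node is sampled before the loop body runs, so the
current work can be unlinked and handed back to its owner, who may
free or reuse it immediately:

  struct cpu_stop_work *work, *tmp;

  work = list_first_entry(&stopper->works, struct cpu_stop_work, list);
  while (&work->list != &stopper->works) {
  	tmp = list_next_entry(work, list);	 /* cache the next node first */
  	list_del_init(&work->list);		 /* leave nothing stale on the list */
  	cpu_stop_signal_done(work->done, false); /* owner may free 'work' from here on */
  	work = tmp;
  }

A plain list_for_each_entry() would read work->list.next after
list_del_init() and after the owner may already have freed the work,
so the _safe variant is required once entries are removed inside the
loop.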