arm64: insn: avoid virt_to_page() translations on core kernel symbols
author Ard Biesheuvel <ard.biesheuvel@linaro.org>
Wed, 30 Mar 2016 14:45:59 +0000 (16:45 +0200)
committer Will Deacon <will.deacon@arm.com>
Thu, 14 Apr 2016 15:31:49 +0000 (16:31 +0100)
Before restricting virt_to_page() to the linear mapping, ensure that
the text patching code does not use it to resolve references into the
core kernel text, which is mapped in the vmalloc area.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arch/arm64/kernel/insn.c

index 7371455..368c082 100644
@@ -96,7 +96,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
        if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
                page = vmalloc_to_page(addr);
        else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
-               page = virt_to_page(addr);
+               page = pfn_to_page(PHYS_PFN(__pa(addr)));
        else
                return addr;
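
For reference, a minimal sketch of patch_map() as it reads after this change. Only the two page-lookup lines above are part of this patch; the surrounding structure (the core_kernel_text() check, the BUG_ON() and the fixmap remapping at the end) is reconstructed from the surrounding upstream code and may differ in detail.

/*
 * Sketch of patch_map() after this change. Code outside the hunk above
 * is an assumption based on the surrounding upstream function, not part
 * of this patch.
 */
static void __kprobes *patch_map(void *addr, int fixmap)
{
        unsigned long uintaddr = (uintptr_t) addr;
        bool module = !core_kernel_text(uintaddr);
        struct page *page;

        if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
                /* module text lives in the vmalloc area: walk the page tables */
                page = vmalloc_to_page(addr);
        else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
                /*
                 * Core kernel text may live outside the linear mapping, so
                 * virt_to_page() is not valid here; __pa() handles kernel
                 * symbol addresses, and the resulting PFN indexes the
                 * struct page directly.
                 */
                page = pfn_to_page(PHYS_PFN(__pa(addr)));
        else
                return addr;

        BUG_ON(!page);
        return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
                        (uintaddr & ~PAGE_MASK));
}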