author      guenther <guenther@openbsd.org>
            Thu, 12 Jul 2018 14:11:11 +0000 (14:11 +0000)
committer   guenther <guenther@openbsd.org>
            Thu, 12 Jul 2018 14:11:11 +0000 (14:11 +0000)
commit      1fc8fad1ef00427ff55d700df3f3dfdb82455f63
tree        0e357d177b40c5a738fc261b53ca00dc615b2651
parent      4a4b5bde84567027bf4f5bd3700ac1fdd1893da2

Reorganize the Meltdown entry and exit trampolines for syscall and
traps so that the "mov %rax,%cr3" is followed by an infinite loop
which is avoided because the mapping of the code being executed is
changed.  This means the sysretq/iretq isn't even present in that
flow of instructions in the kernel mapping, so userspace code can't
be speculatively reached on the kernel mapping, and it totally
eliminates the conditional jump over the %cr3 change that supported
CPUs without the Meltdown vulnerability.  The return paths were probably
vulnerable to Spectre v1 (and v1.1/1.2) style attacks, speculatively
executing user code post-system-call with the kernel mappings, thus
creating cache/TLB/etc side-effects.
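
To make the shape of this concrete, here's a rough sketch of such a
return trampoline.  The section names, labels, and the exact return
sequence are assumptions for illustration only, not the actual
locore.S code; the real layout is arranged by the linker script and
mapped by pmap.

	/* Kernel text view: the bytes after the CR3 switch are an
	 * endless loop, so nothing on the kernel mapping can be
	 * speculated into a return to userspace. */
	.text				/* kernel-only mapping */
tramp_return_kern:			/* assumed label */
	swapgs
	movq	%rax,%cr3		/* load the user page tables */
0:	pause
	jmp	0b			/* never reached architecturally */

	/* User-mapped alias: a different physical page placed at the
	 * same virtual address in the user page tables, with the same
	 * offsets up to the CR3 switch.  Once the new page tables
	 * take effect, instruction fetch continues here and finds the
	 * real return instruction. */
	.section .kutext,"ax"		/* assumed section name */
tramp_return_user:			/* assumed label */
	swapgs
	movq	%rax,%cr3
	sysretq				/* only exists in the user view */

The key property is that the two views share offsets up to the %cr3
write, so only the post-switch bytes differ between the mappings.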

Would like to apply this technique to the interrupt stubs too, but
I'm hitting a bug in clang's assembler which misaligns the code and
symbols.

While here, when on a CPU not vulnerable to Meltdown, codepatch out
the unnecessary bits in cpu_switchto().
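
For reference, the codepatch machinery brackets an instruction range
with markers recorded in a dedicated section so it can be rewritten
during boot.  A minimal sketch of what such a span in cpu_switchto()
could look like follows; the tag name, registers, and offsets are
assumptions for illustration, the real macros live in codepatch.h:

	/* Only Meltdown-affected CPUs need to track the new pmap's
	 * user-space CR3 value for the exit trampoline.  On other
	 * CPUs the span between the markers can be overwritten with
	 * NOPs at boot, e.g. by codepatch_nop() with the same tag. */
	CODEPATCH_START
	movq	PM_PDIRPA_INTEL(%rdi),%rax	/* assumed offset */
	movq	%rax,CPUVAR(USER_CR3)		/* assumed per-CPU field */
	CODEPATCH_END(CPTAG_MELTDOWN)		/* assumed tag name */

With the span NOPed out, CPUs that don't need the user/kernel split
pay only a handful of NOPs here instead of the extra loads.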

Inspiration from sf@, refined over dinner with theo
ok mlarkin@ deraadt@
sys/arch/amd64/amd64/cpu.c
sys/arch/amd64/amd64/identcpu.c
sys/arch/amd64/amd64/locore.S
sys/arch/amd64/amd64/machdep.c
sys/arch/amd64/amd64/pmap.c
sys/arch/amd64/amd64/vector.S
sys/arch/amd64/conf/ld.script
sys/arch/amd64/include/asm.h
sys/arch/amd64/include/codepatch.h