commit ffe8cffc8be1ae47c08cbc3571bed6b5b0fa53ad Author: Greg Kroah-Hartman Date: Tue May 14 19:19:42 2019 +0200 Linux 4.9.176 commit 192d1975450e51c1abb725343a7e19a4d61e30bd Author: Andi Kleen Date: Fri Mar 29 17:47:43 2019 -0700 x86/cpu/bugs: Use __initconst for 'const' init data commit 1de7edbb59c8f1b46071f66c5c97b8a59569eb51 upstream. Some of the recently added const tables use __initdata which causes section attribute conflicts. Use __initconst instead. Fixes: fa1202ef2243 ("x86/speculation: Add command line control") Signed-off-by: Andi Kleen Signed-off-by: Thomas Gleixner Link: https://lkml.kernel.org/r/20190330004743.29541-9-andi@firstfloor.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 626743f43da44598076019a82193caf49dca1fde Author: Nicolas Dichtel Date: Mon Mar 27 14:20:08 2017 +0200 x86: stop exporting msr-index.h to userland commit 25dc1d6cc3082aab293e5dad47623b550f7ddd2a upstream. Even if this file was not in an uapi directory, it was exported because it was listed in the Kbuild file. Fixes: b72e7464e4cf ("x86/uapi: Do not export as part of the user API headers") Suggested-by: Borislav Petkov Signed-off-by: Nicolas Dichtel Acked-by: Ingo Molnar Acked-by: Thomas Gleixner Signed-off-by: Masahiro Yamada Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 2a099011de8abebac475a90dad1835c60dfca88c Author: Josh Poimboeuf Date: Tue May 7 15:05:22 2019 -0500 x86/speculation/mds: Fix documentation typo commit 95310e348a321b45fb746c176961d4da72344282 upstream. Fix a minor typo in the MDS documentation: "eanbled" -> "enabled". Reported-by: Jeff Bastian Signed-off-by: Josh Poimboeuf Signed-off-by: Thomas Gleixner Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit da360f1f5eb43e0d71009bab3be53c7a06d40caf Author: Tyler Hicks Date: Mon May 6 23:52:58 2019 +0000 Documentation: Correct the possible MDS sysfs values commit ea01668f9f43021b28b3f4d5ffad50106a1e1301 upstream. Adjust the last two rows in the table that display possible values when MDS mitigation is enabled. They both were slightly inaccurate. In addition, convert the table of possible values and their descriptions to a list-table. The simple table format uses the top border of equals signs to determine cell width which resulted in the first column being far too wide in comparison to the second column that contained the majority of the text. Signed-off-by: Tyler Hicks Signed-off-by: Thomas Gleixner [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 96c06cda5b4bdc6a3a9a8f8adc46c86077a70ee0 Author: speck for Pawan Gupta Date: Mon May 6 12:23:50 2019 -0700 x86/mds: Add MDSUM variant to the MDS documentation commit e672f8bf71c66253197e503f75c771dd28ada4a0 upstream. Updated the documentation for a new CVE-2019-11091 Microarchitectural Data Sampling Uncacheable Memory (MDSUM) which is a variant of Microarchitectural Data Sampling (MDS). MDS is a family of side channel attacks on internal buffers in Intel CPUs. MDSUM is a special case of MSBDS, MFBDS and MLPDS. An uncacheable load from memory that takes a fault or assist can leave data in a microarchitectural structure that may later be observed using one of the same methods used by MSBDS, MFBDS or MLPDS. There are no new code changes expected for MDSUM. The existing mitigation for MDS applies to MDSUM as well.
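The sysfs value whose possible strings the documentation patches above adjust can be read from user space once the MDS sysfs reporting patch later in this log is applied; a minimal sketch in plain C (the file path is the one the documentation describes, nothing else is assumed):

    #include <stdio.h>

    int main(void)
    {
        char buf[256];
        /* Present only on kernels with the MDS sysfs reporting patch applied */
        FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/mds", "r");

        if (!f) {
            perror("mds sysfs file");
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("MDS status: %s", buf); /* one of the status strings documented above */
        fclose(f);
        return 0;
    }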
Signed-off-by: Pawan Gupta Signed-off-by: Thomas Gleixner Reviewed-by: Tyler Hicks Reviewed-by: Jon Masters [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 025b9cf2a0fcaf8d971b8bea66f661cf3751c245 Author: Josh Poimboeuf Date: Wed Apr 17 16:39:02 2019 -0500 x86/speculation/mds: Add 'mitigations=' support for MDS commit 5c14068f87d04adc73ba3f41c2a303d3c3d1fa12 upstream. Add MDS to the new 'mitigations=' cmdline option. Signed-off-by: Josh Poimboeuf Signed-off-by: Thomas Gleixner Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 1709284f082fbcb4a8e410242dcec3cc68389cda Author: Josh Poimboeuf Date: Fri Apr 12 15:39:29 2019 -0500 x86/speculation: Support 'mitigations=' cmdline option commit d68be4c4d31295ff6ae34a8ddfaa4c1a8ff42812 upstream. Configure x86 runtime CPU speculation bug mitigations in accordance with the 'mitigations=' cmdline option. This affects Meltdown, Spectre v2, Speculative Store Bypass, and L1TF. The default behavior is unchanged. Signed-off-by: Josh Poimboeuf Signed-off-by: Thomas Gleixner Tested-by: Jiri Kosina (on x86) Reviewed-by: Jiri Kosina Cc: Borislav Petkov Cc: "H . Peter Anvin" Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: Jiri Kosina Cc: Waiman Long Cc: Andrea Arcangeli Cc: Jon Masters Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Cc: Michael Ellerman Cc: linuxppc-dev@lists.ozlabs.org Cc: Martin Schwidefsky Cc: Heiko Carstens Cc: linux-s390@vger.kernel.org Cc: Catalin Marinas Cc: Will Deacon Cc: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org Cc: Greg Kroah-Hartman Cc: Tyler Hicks Cc: Linus Torvalds Cc: Randy Dunlap Cc: Steven Price Cc: Phil Auld Link: https://lkml.kernel.org/r/6616d0ae169308516cfdf5216bedd169f8a8291b.1555085500.git.jpoimboe@redhat.com [bwh: Backported to 4.9: adjust filenames, context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit edda9c38930f5088a740952d5181bc1aa443e63c Author: Josh Poimboeuf Date: Fri Apr 12 15:39:28 2019 -0500 cpu/speculation: Add 'mitigations=' cmdline option commit 98af8452945c55652de68536afdde3b520fec429 upstream. Keeping track of the number of mitigations for all the CPU speculation bugs has become overwhelming for many users. It's getting more and more complicated to decide which mitigations are needed for a given architecture. Complicating matters is the fact that each arch tends to have its own custom way to mitigate the same vulnerability. Most users fall into a few basic categories: a) they want all mitigations off; b) they want all reasonable mitigations on, with SMT enabled even if it's vulnerable; or c) they want all reasonable mitigations on, with SMT disabled if vulnerable. Define a set of curated, arch-independent options, each of which is an aggregation of existing options: - mitigations=off: Disable all mitigations. - mitigations=auto: [default] Enable all the default mitigations, but leave SMT enabled, even if it's vulnerable. - mitigations=auto,nosmt: Enable all the default mitigations, disabling SMT if needed by a mitigation. Currently, these options are placeholders which don't actually do anything. They will be fleshed out in upcoming patches. Signed-off-by: Josh Poimboeuf Signed-off-by: Thomas Gleixner Tested-by: Jiri Kosina (on x86) Reviewed-by: Jiri Kosina Cc: Borislav Petkov Cc: "H . 
Peter Anvin" Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: Jiri Kosina Cc: Waiman Long Cc: Andrea Arcangeli Cc: Jon Masters Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Cc: Michael Ellerman Cc: linuxppc-dev@lists.ozlabs.org Cc: Martin Schwidefsky Cc: Heiko Carstens Cc: linux-s390@vger.kernel.org Cc: Catalin Marinas Cc: Will Deacon Cc: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org Cc: Greg Kroah-Hartman Cc: Tyler Hicks Cc: Linus Torvalds Cc: Randy Dunlap Cc: Steven Price Cc: Phil Auld Link: https://lkml.kernel.org/r/b07a8ef9b7c5055c3a4637c87d07c296d5016fe0.1555085500.git.jpoimboe@redhat.com [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 3645b361be489077bd85458c40e47be791ca318c Author: Konrad Rzeszutek Wilk Date: Fri Apr 12 17:50:58 2019 -0400 x86/speculation/mds: Print SMT vulnerable on MSBDS with mitigations off commit e2c3c94788b08891dcf3dbe608f9880523ecd71b upstream. This code is only for CPUs which are affected by MSBDS, but are *not* affected by the other two MDS issues. For such CPUs, enabling the mds_idle_clear mitigation is enough to mitigate SMT. However if user boots with 'mds=off' and still has SMT enabled, we should not report that SMT is mitigated: $cat /sys//devices/system/cpu/vulnerabilities/mds Vulnerable; SMT mitigated But rather: Vulnerable; SMT vulnerable Signed-off-by: Konrad Rzeszutek Wilk Signed-off-by: Thomas Gleixner Reviewed-by: Tyler Hicks Reviewed-by: Josh Poimboeuf Link: https://lkml.kernel.org/r/20190412215118.294906495@localhost.localdomain Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 450aa01a076d9aa5b459a7a33c74d95eca6a1e37 Author: Boris Ostrovsky Date: Fri Apr 12 17:50:57 2019 -0400 x86/speculation/mds: Fix comment commit cae5ec342645746d617dd420d206e1588d47768a upstream. s/L1TF/MDS/ Signed-off-by: Boris Ostrovsky Signed-off-by: Konrad Rzeszutek Wilk Signed-off-by: Thomas Gleixner Reviewed-by: Tyler Hicks Reviewed-by: Josh Poimboeuf Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit f8a0bbe4bac879c0caf47ca699925ab29a4a9375 Author: Josh Poimboeuf Date: Tue Apr 2 10:00:51 2019 -0500 x86/speculation/mds: Add SMT warning message commit 39226ef02bfb43248b7db12a4fdccb39d95318e3 upstream. MDS is vulnerable with SMT. Make that clear with a one-time printk whenever SMT first gets enabled. Signed-off-by: Josh Poimboeuf Signed-off-by: Thomas Gleixner Reviewed-by: Tyler Hicks Acked-by: Jiri Kosina Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 98c4b3c2ee37ca65d72d23243b621006b69158fd Author: Josh Poimboeuf Date: Tue Apr 2 10:00:14 2019 -0500 x86/speculation: Move arch_smt_update() call to after mitigation decisions commit 7c3658b20194a5b3209a143f63bc9c643c6a3ae2 upstream. arch_smt_update() now has a dependency on both Spectre v2 and MDS mitigations. Move its initial call to after all the mitigation decisions have been made. Signed-off-by: Josh Poimboeuf Signed-off-by: Thomas Gleixner Reviewed-by: Tyler Hicks Acked-by: Jiri Kosina Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit f02eee68e2fc2ded5d620684599826d10392d055 Author: Josh Poimboeuf Date: Tue Apr 2 09:59:33 2019 -0500 x86/speculation/mds: Add mds=full,nosmt cmdline option commit d71eb0ce109a124b0fa714832823b9452f2762cf upstream. Add the mds=full,nosmt cmdline option. This is like mds=full, but with SMT disabled if the CPU is vulnerable. 
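The resulting option handling is approximately the following (a sketch of arch/x86/kernel/cpu/bugs.c after this change; mds_mitigation, mds_nosmt and the MDS_MITIGATION_* values are the state used elsewhere in this series):

    static bool mds_nosmt __ro_after_init;

    static int __init mds_cmdline(char *str)
    {
        if (!boot_cpu_has_bug(X86_BUG_MDS))
            return 0;

        if (!str)
            return -EINVAL;

        if (!strcmp(str, "off"))
            mds_mitigation = MDS_MITIGATION_OFF;
        else if (!strcmp(str, "full"))
            mds_mitigation = MDS_MITIGATION_FULL;
        else if (!strcmp(str, "full,nosmt")) {
            /* Like "full", but additionally deselect SMT on affected CPUs */
            mds_mitigation = MDS_MITIGATION_FULL;
            mds_nosmt = true;
        }

        return 0;
    }
    early_param("mds", mds_cmdline);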
Signed-off-by: Josh Poimboeuf Signed-off-by: Thomas Gleixner Reviewed-by: Tyler Hicks Acked-by: Jiri Kosina [bwh: Backported to 4.9: adjust filenames] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 3880bc168f2188b7e039a9b16a13dbff7b80d462 Author: Thomas Gleixner Date: Tue Feb 19 00:02:31 2019 +0100 Documentation: Add MDS vulnerability documentation commit 5999bbe7a6ea3c62029532ec84dc06003a1fa258 upstream. Add the initial MDS vulnerability documentation. Signed-off-by: Thomas Gleixner Reviewed-by: Jon Masters [bwh: Backported to 4.9: adjust filenames] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit cb106035bd0f0f43c78a29a56c270e1df0e75c24 Author: Thomas Gleixner Date: Tue Feb 19 11:10:49 2019 +0100 Documentation: Move L1TF to separate directory commit 65fd4cb65b2dad97feb8330b6690445910b56d6a upstream. Move L1TF to a separate directory so the MDS stuff can be added at the side. Otherwise all the hardware vulnerabilities would each have their own top level entry. Should have done that right away. Signed-off-by: Thomas Gleixner Reviewed-by: Greg Kroah-Hartman Reviewed-by: Jon Masters [bwh: Backported to 4.9: adjust filenames, context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 81ea109a9b1265e715c1ce5b45f6d0ed99b9f482 Author: Thomas Gleixner Date: Wed Feb 20 09:40:40 2019 +0100 x86/speculation/mds: Add mitigation mode VMWERV commit 22dd8365088b6403630b82423cf906491859b65e upstream. In virtualized environments it can happen that the host has the microcode update which utilizes the VERW instruction to clear CPU buffers, but the hypervisor is not yet updated to expose the X86_FEATURE_MD_CLEAR CPUID bit to guests. Introduce an internal mitigation mode VMWERV which enables the invocation of the CPU buffer clearing even if X86_FEATURE_MD_CLEAR is not set. If the system has no updated microcode this results in a pointless execution of the VERW instruction wasting a few CPU cycles. If the microcode is updated, but not exposed to a guest then the CPU buffers will be cleared. That said: Virtual Machines Will Eventually Receive Vaccine Signed-off-by: Thomas Gleixner Reviewed-by: Borislav Petkov Reviewed-by: Jon Masters Tested-by: Jon Masters Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit ba08d562b066f044e2985ece32b7890f556ee5ed Author: Thomas Gleixner Date: Mon Feb 18 22:51:43 2019 +0100 x86/speculation/mds: Add sysfs reporting for MDS commit 8a4b06d391b0a42a373808979b5028f5c84d9c6a upstream. Add the sysfs reporting file for MDS. It exposes the vulnerability and mitigation state similar to the existing files for the other speculative hardware vulnerabilities. Signed-off-by: Thomas Gleixner Reviewed-by: Greg Kroah-Hartman Reviewed-by: Borislav Petkov Reviewed-by: Jon Masters Tested-by: Jon Masters [bwh: Backported to 4.9: test x86_hyper instead of using hypervisor_is_type()] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 4e722ae3141fc6aebadc722b3b10720e2ffd866f Author: Thomas Gleixner Date: Mon Feb 18 22:04:08 2019 +0100 x86/speculation/mds: Add mitigation control for MDS commit bc1241700acd82ec69fde98c5763ce51086269f8 upstream. Now that the mitigations are in place, add a command line parameter to control the mitigation, a mitigation selector function and an SMT update mechanism. This is the minimal, straightforward initial implementation which just provides an always on/off mode.
The command line parameter is: mds=[full|off] This is consistent with the existing mitigations for other speculative hardware vulnerabilities. The idle invocation is dynamically updated according to the SMT state of the system similar to the dynamic update of the STIBP mitigation. The idle mitigation is limited to CPUs which are only affected by MSBDS and not any other variant, because the other variants cannot be mitigated on SMT enabled systems. Signed-off-by: Thomas Gleixner Reviewed-by: Borislav Petkov Reviewed-by: Jon Masters Tested-by: Jon Masters [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 2394f5912c223b767be0c4f8365570335110a8c0 Author: Thomas Gleixner Date: Mon Feb 18 23:04:01 2019 +0100 x86/speculation/mds: Conditionally clear CPU buffers on idle entry commit 07f07f55a29cb705e221eda7894dd67ab81ef343 upstream. Add a static key which controls the invocation of the CPU buffer clear mechanism on idle entry. This is independent of other MDS mitigations because the idle entry invocation to mitigate the potential leakage due to store buffer repartitioning is only necessary on SMT systems. Add the actual invocations to the different halt/mwait variants which covers all usage sites. mwaitx is not patched as it's not available on Intel CPUs. The buffer clear is only invoked before entering the C-State to prevent that stale data from the idling CPU is spilled to the Hyper-Thread sibling after the Store buffer got repartitioned and all entries are available to the non idle sibling. When coming out of idle the store buffer is partitioned again so each sibling has half of it available. Now CPU which returned from idle could be speculatively exposed to contents of the sibling, but the buffers are flushed either on exit to user space or on VMENTER. When later on conditional buffer clearing is implemented on top of this, then there is no action required either because before returning to user space the context switch will set the condition flag which causes a flush on the return to user path. Note, that the buffer clearing on idle is only sensible on CPUs which are solely affected by MSBDS and not any other variant of MDS because the other MDS variants cannot be mitigated when SMT is enabled, so the buffer clearing on idle would be a window dressing exercise. This intentionally does not handle the case in the acpi/processor_idle driver which uses the legacy IO port interface for C-State transitions for two reasons: - The acpi/processor_idle driver was replaced by the intel_idle driver almost a decade ago. Anything Nehalem upwards supports it and defaults to that new driver. - The legacy IO port interface is likely to be used on older and therefore unaffected CPUs or on systems which do not receive microcode updates anymore, so there is no point in adding that. Signed-off-by: Thomas Gleixner Reviewed-by: Borislav Petkov Reviewed-by: Greg Kroah-Hartman Reviewed-by: Frederic Weisbecker Reviewed-by: Jon Masters Tested-by: Jon Masters Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 3a8e7f6993c8240f6cc8564ff06702512b3b18bb Author: Thomas Gleixner Date: Wed Feb 27 12:48:14 2019 +0100 x86/kvm/vmx: Add MDS protection when L1D Flush is not active commit 650b68a0622f933444a6d66936abb3103029413b upstream. CPUs which are affected by L1TF and MDS mitigate MDS with the L1D Flush on VMENTER when updated microcode is installed. 
If a CPU is not affected by L1TF or if the L1D Flush is not in use, then MDS mitigation needs to be invoked explicitly. For these cases, follow the host mitigation state and invoke the MDS mitigation before VMENTER. Signed-off-by: Thomas Gleixner Reviewed-by: Greg Kroah-Hartman Reviewed-by: Frederic Weisbecker Reviewed-by: Borislav Petkov Reviewed-by: Jon Masters Tested-by: Jon Masters Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 20041a0ebf3f9d99db3a8ffd81a679b925cb9fe4 Author: Thomas Gleixner Date: Mon Feb 18 23:42:51 2019 +0100 x86/speculation/mds: Clear CPU buffers on exit to user commit 04dcbdb8057827b043b3c71aa397c4c63e67d086 upstream. Add a static key which controls the invocation of the CPU buffer clear mechanism on exit to user space and add the call into prepare_exit_to_usermode() and do_nmi() right before actually returning. Add documentation about which kernel to user space transitions this covers and explain why some corner cases are not mitigated. Signed-off-by: Thomas Gleixner Reviewed-by: Greg Kroah-Hartman Reviewed-by: Borislav Petkov Reviewed-by: Frederic Weisbecker Reviewed-by: Jon Masters Tested-by: Jon Masters Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 96ef7afd8c38c88419d1bd85f6cc25c3aa403224 Author: Thomas Gleixner Date: Mon Feb 18 23:13:06 2019 +0100 x86/speculation/mds: Add mds_clear_cpu_buffers() commit 6a9e529272517755904b7afa639f6db59ddb793e upstream. The Microarchitectural Data Sampling (MDS) vulnerabilities are mitigated by clearing the affected CPU buffers. The mechanism for clearing the buffers uses the unused and obsolete VERW instruction in combination with a microcode update which triggers a CPU buffer clear when VERW is executed. Provide an inline function with the assembly magic. The argument of the VERW instruction must be a memory operand as documented: "MD_CLEAR enumerates that the memory-operand variant of VERW (for example, VERW m16) has been extended to also overwrite buffers affected by MDS. This buffer overwriting functionality is not guaranteed for the register operand variant of VERW." The documentation also recommends using a writable data segment selector: "The buffer overwriting occurs regardless of the result of the VERW permission check, as well as when the selector is null or causes a descriptor load segment violation. However, for lowest latency we recommend using a selector that indicates a valid writable data segment." Add x86 specific documentation about MDS and the internal workings of the mitigation. Signed-off-by: Thomas Gleixner Reviewed-by: Borislav Petkov Reviewed-by: Greg Kroah-Hartman Reviewed-by: Frederic Weisbecker Reviewed-by: Jon Masters Tested-by: Jon Masters [bwh: Backported to 4.9: add the "Architecture-specific documentation" section to the index] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit eb2aa332cfe39e05585534017ad94b7717dbdf85 Author: Andi Kleen Date: Fri Jan 18 16:50:23 2019 -0800 x86/kvm: Expose X86_FEATURE_MD_CLEAR to guests commit 6c4dbbd14730c43f4ed808a9c42ca41625925c22 upstream. X86_FEATURE_MD_CLEAR is a new CPUID bit which is set when microcode provides the mechanism to invoke a flush of various exploitable CPU buffers by invoking the VERW instruction. Hand it through to guests so they can adjust their mitigations. This also requires corresponding qemu changes, which are available separately.
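The helper added by the mds_clear_cpu_buffers() patch above reduces to a single VERW with a memory operand, roughly (cf. arch/x86/include/asm/nospec-branch.h in the patched tree):

    static inline void mds_clear_cpu_buffers(void)
    {
        /*
         * The memory-operand variant of VERW is required; with the
         * MD_CLEAR microcode it overwrites the affected CPU buffers
         * as a side effect. A valid writable data segment selector
         * gives the lowest latency, hence __KERNEL_DS.
         */
        static const u16 ds = __KERNEL_DS;

        asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
    }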
[ tglx: Massaged changelog ] Signed-off-by: Andi Kleen Signed-off-by: Thomas Gleixner Reviewed-by: Borislav Petkov Reviewed-by: Greg Kroah-Hartman Reviewed-by: Frederic Weisbecker Reviewed-by: Jon Masters Tested-by: Jon Masters [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 1cdffecc34ba5d5af61b456fb0f46abbb3a86816 Author: Thomas Gleixner Date: Fri Mar 1 20:21:08 2019 +0100 x86/speculation/mds: Add BUG_MSBDS_ONLY commit e261f209c3666e842fd645a1e31f001c3a26def9 upstream. This bug bit is set on CPUs which are only affected by Microarchitectural Store Buffer Data Sampling (MSBDS) and not by any other MDS variant. This is important because the Store Buffers are partitioned between Hyper-Threads so cross thread forwarding is not possible. But if a thread enters or exits a sleep state the store buffer is repartitioned which can expose data from one thread to the other. This transition can be mitigated. That means that for CPUs which are only affected by MSBDS, SMT can be enabled, if the CPU is not affected by other SMT sensitive vulnerabilities, e.g. L1TF. The XEON PHI variants fall into that category. Also the Silvermont/Airmont ATOMs, but for them it's not really relevant as they do not support SMT, but mark them for completeness' sake. Signed-off-by: Thomas Gleixner Reviewed-by: Frederic Weisbecker Reviewed-by: Jon Masters Tested-by: Jon Masters [bwh: Backported to 4.9: adjust context, indentation] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit fbf6ad08fd9ba697f6d127dbf089739fecbd433e Author: Andi Kleen Date: Fri Jan 18 16:50:16 2019 -0800 x86/speculation/mds: Add basic bug infrastructure for MDS commit ed5194c2732c8084af9fd159c146ea92bf137128 upstream. Microarchitectural Data Sampling (MDS) is a class of side channel attacks on internal buffers in Intel CPUs. The variants are: - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126) - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130) - Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127) MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a dependent load (store-to-load forwarding) as an optimization. The forward can also happen to a faulting or assisting load operation for a different memory address, which can be exploited under certain conditions. Store buffers are partitioned between Hyper-Threads so cross thread forwarding is not possible. But if a thread enters or exits a sleep state the store buffer is repartitioned which can expose data from one thread to the other. MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage L1 miss situations and to hold data which is returned or sent in response to a memory or I/O operation. Fill buffers can forward data to a load operation and also write data to the cache. When the fill buffer is deallocated it can retain the stale data of the preceding operations which can then be forwarded to a faulting or assisting load operation, which can be exploited under certain conditions. Fill buffers are shared between Hyper-Threads so cross thread leakage is possible. MLPDS leaks Load Port Data. Load ports are used to perform load operations from memory or I/O. The received data is then forwarded to the register file or a subsequent operation.
In some implementations the Load Port can contain stale data from a previous operation which can be forwarded to faulting or assisting loads under certain conditions, which again can be exploited eventually. Load ports are shared between Hyper-Threads so cross thread leakage is possible. All variants have the same mitigation for single CPU thread case (SMT off), so the kernel can treat them as one MDS issue. Add the basic infrastructure to detect if the current CPU is affected by MDS. [ tglx: Rewrote changelog ] Signed-off-by: Andi Kleen Signed-off-by: Thomas Gleixner Reviewed-by: Borislav Petkov Reviewed-by: Greg Kroah-Hartman Reviewed-by: Frederic Weisbecker Reviewed-by: Jon Masters Tested-by: Jon Masters [bwh: Backported to 4.9: adjust context, indentation] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit d5272d01ef727181cbc36292bc02425e6993ef5b Author: Thomas Gleixner Date: Wed Feb 27 10:10:23 2019 +0100 x86/speculation: Consolidate CPU whitelists commit 36ad35131adacc29b328b9c8b6277a8bf0d6fd5d upstream. The CPU vulnerability whitelists have some overlap and there are more whitelists coming along. Use the driver_data field in the x86_cpu_id struct to denote the whitelisted vulnerabilities and combine all whitelists into one. Suggested-by: Linus Torvalds Signed-off-by: Thomas Gleixner Reviewed-by: Frederic Weisbecker Reviewed-by: Greg Kroah-Hartman Reviewed-by: Borislav Petkov Reviewed-by: Jon Masters Tested-by: Jon Masters Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit b76f8af91206a12a68773d3c86f3f343d611deb0 Author: Thomas Gleixner Date: Thu Feb 21 12:36:50 2019 +0100 x86/msr-index: Cleanup bit defines commit d8eabc37310a92df40d07c5a8afc53cebf996716 upstream. Greg pointed out that speculation related bit defines are using (1 << N) format instead of BIT(N). Aside of that (1 << N) is wrong as it should use 1UL at least. Clean it up. [ Josh Poimboeuf: Fix tools build ] Reported-by: Greg Kroah-Hartman Signed-off-by: Thomas Gleixner Reviewed-by: Greg Kroah-Hartman Reviewed-by: Borislav Petkov Reviewed-by: Frederic Weisbecker Reviewed-by: Jon Masters Tested-by: Jon Masters [bwh: Backported to 4.9: Drop change to x86_energy_perf_policy, which doesn't use msr-index.h here] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 6198041f012eda83d6cfb28912db6061ee2702b4 Author: Eduardo Habkost Date: Wed Dec 5 17:19:56 2018 -0200 kvm: x86: Report STIBP on GET_SUPPORTED_CPUID commit d7b09c827a6cf291f66637a36f46928dd1423184 upstream. Months ago, we have added code to allow direct access to MSR_IA32_SPEC_CTRL to the guest, which makes STIBP available to guests. This was implemented by commits d28b387fb74d ("KVM/VMX: Allow direct access to MSR_IA32_SPEC_CTRL") and b2ac58f90540 ("KVM/SVM: Allow direct access to MSR_IA32_SPEC_CTRL"). However, we never updated GET_SUPPORTED_CPUID to let userspace know that STIBP can be enabled in CPUID. Fix that by updating kvm_cpuid_8000_0008_ebx_x86_features and kvm_cpuid_7_0_edx_x86_features. Signed-off-by: Eduardo Habkost Reviewed-by: Jim Mattson Reviewed-by: Konrad Rzeszutek Wilk Signed-off-by: Paolo Bonzini Signed-off-by: Thomas Gleixner [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit e58cf37a3c2e102af3f28a7bd24bc5aa03c75564 Author: Thomas Gleixner Date: Sun Nov 25 19:33:56 2018 +0100 x86/speculation: Provide IBPB always command line options commit 55a974021ec952ee460dc31ca08722158639de72 upstream. 
Provide the possibility to enable IBPB always in combination with 'prctl' and 'seccomp'. Add the extra command line options and rework the IBPB selection to evaluate the command instead of the mode selected by the STIBP switch case. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185006.144047038@linutronix.de [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 6f4b925ec2943ee6054658eb06fa7a68927486a9 Author: Thomas Gleixner Date: Sun Nov 25 19:33:55 2018 +0100 x86/speculation: Add seccomp Spectre v2 user space protection mode commit 6b3e64c237c072797a9ec918654a60e3a46488e2 upstream. If 'prctl' mode of user space protection from spectre v2 is selected on the kernel command-line, STIBP and IBPB are applied on tasks which restrict their indirect branch speculation via prctl. SECCOMP enables the SSBD mitigation for sandboxed tasks already, so it makes sense to prevent spectre v2 user space to user space attacks as well. The Intel mitigation guide documents how STIBP works: Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor prevents the predicted targets of indirect branches on any logical processor of that core from being controlled by software that executes (or executed previously) on another logical processor of the same core. Ergo setting STIBP protects the task itself from being attacked by a task running on a different hyper-thread and protects the tasks running on different hyper-threads from being attacked. While the document suggests that the branch predictors are shielded between the logical processors, the observed performance regressions suggest that STIBP simply disables the branch predictor more or less completely. Of course the document wording is vague, but the fact that there is also no requirement for issuing IBPB when STIBP is used points clearly in that direction. The kernel still issues IBPB even when STIBP is used until Intel clarifies the whole mechanism. IBPB is issued when the task switches out, so malicious sandbox code cannot mistrain the branch predictor for the next user space task on the same logical processor. Signed-off-by: Jiri Kosina Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185006.051663132@linutronix.de [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 91d9bbd3e4bdb0494ba1d2922646cabb6b8e6e2b Author: Thomas Gleixner Date: Sun Nov 25 19:33:54 2018 +0100 x86/speculation: Enable prctl mode for spectre_v2_user commit 7cc765a67d8e04ef7d772425ca5a2a1e2b894c15 upstream. Now that all prerequisites are in place: - Add the prctl command line option - Default the 'auto' mode to 'prctl' - When SMT state changes, update the static key which controls the conditional STIBP evaluation on context switch.
- At init update the static key which controls the conditional IBPB evaluation on context switch. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.958421388@linutronix.de [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 2d99bc055e458eaaf78e4901e78961546eecf5f4 Author: Thomas Gleixner Date: Sun Nov 25 19:33:53 2018 +0100 x86/speculation: Add prctl() control for indirect branch speculation commit 9137bb27e60e554dab694eafa4cca241fa3a694f upstream. Add the PR_SPEC_INDIRECT_BRANCH option for the PR_GET_SPECULATION_CTRL and PR_SET_SPECULATION_CTRL prctls to allow fine grained per task control of indirect branch speculation via STIBP and IBPB. Invocations: Check indirect branch speculation status with - prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0); Enable indirect branch speculation with - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_ENABLE, 0, 0); Disable indirect branch speculation with - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0); Force disable indirect branch speculation with - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0); See Documentation/userspace-api/spec_ctrl.rst. Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.866780996@linutronix.de [bwh: Backported to 4.9: - Renumber the PFA flags - Drop changes in tools/include/uapi/linux/prctl.h - Adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 6febf94d190c1cf977247fe4519a01f0828b68ca Author: Thomas Gleixner Date: Wed Nov 28 10:56:57 2018 +0100 x86/speculation: Prevent stale SPEC_CTRL msr content commit 6d991ba509ebcfcc908e009d1db51972a4f7a064 upstream. The seccomp speculation control operates on all tasks of a process, but only the current task of a process can update the MSR immediately. For the other threads the update is deferred to the next context switch. This creates the following situation with Process A and B: Process A task 2 and Process B task 1 are pinned on CPU1. Process A task 2 does not have the speculation control TIF bit set. Process B task 1 has the speculation control TIF bit set.

    CPU0                                CPU1
                                        MSR bit is set
                                        ProcB.T1 schedules out
                                        ProcA.T2 schedules in
                                        MSR bit is cleared
    ProcA.T1
      seccomp_update()
      set TIF bit on ProcA.T2
                                        ProcB.T1 schedules in
                                        MSR is not updated  <-- FAIL

This happens because the context switch code tries to avoid the MSR update if the speculation control TIF bits of the incoming and the outgoing task are the same. In the worst case ProcB.T1 and ProcA.T2 are the only tasks scheduling back and forth on CPU1, which keeps the MSR stale forever.
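The PR_SPEC_INDIRECT_BRANCH interface listed in the prctl() patch above can be exercised from user space roughly as follows (the PR_* constants are the uapi values; they are defined locally here only in case the installed headers predate the feature):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_GET_SPECULATION_CTRL
    #define PR_GET_SPECULATION_CTRL 52
    #define PR_SET_SPECULATION_CTRL 53
    #endif
    #ifndef PR_SPEC_INDIRECT_BRANCH
    #define PR_SPEC_INDIRECT_BRANCH 1
    #endif
    #ifndef PR_SPEC_DISABLE
    #define PR_SPEC_DISABLE (1UL << 2)
    #endif

    int main(void)
    {
        /* Query the indirect branch speculation state of this task */
        int state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);

        printf("indirect branch speculation state: %d\n", state);

        /* "Disable speculation" == opt this task into STIBP/IBPB protection */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0))
            perror("PR_SET_SPECULATION_CTRL");
        return 0;
    }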
In theory this could be remedied by IPIs, but chasing the remote task which could be migrated is complex and full of races. The straightforward solution is to avoid the asynchronous update of the TIF bit and defer it to the next context switch. The speculation control state is stored in task_struct::atomic_flags by the prctl and seccomp updates already. Add a new TIF_SPEC_FORCE_UPDATE bit and set this after updating the atomic_flags. Check the bit on context switch and force a synchronous update of the speculation control if set. Use the same mechanism for updating the current task. Reported-by: Tim Chen Signed-off-by: Thomas Gleixner Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1811272247140.1875@nanos.tec.linutronix.de [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 6596ca955bf6d04fe2961215f22f84c13ca7217f Author: Thomas Gleixner Date: Sun Nov 25 19:33:52 2018 +0100 x86/speculation: Prepare arch_smt_update() for PRCTL mode commit 6893a959d7fdebbab5f5aa112c277d5a44435ba1 upstream. The upcoming fine grained per task STIBP control needs to be updated on CPU hotplug as well. Split out the code which controls the strict mode so the prctl control code can be added later. Mark the SMP function call argument __unused while at it. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.759457117@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 607a3b3bbd5ba62a3d004e92d2149e040086c498 Author: Thomas Gleixner Date: Sun Nov 25 19:33:51 2018 +0100 x86/speculation: Split out TIF update commit e6da8bb6f9abb2628381904b24163c770e630bac upstream. The update of the TIF_SSBD flag and the conditional speculation control MSR update is done in the ssb_prctl_set() function directly. The upcoming prctl support for controlling indirect branch speculation via STIBP needs the same mechanism. Split the code out and make it reusable. Reword the comment about updates for other tasks. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.652305076@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit c89ef65578170416a225d2b3a6c7299a8d0bcf7c Author: Thomas Gleixner Date: Sun Nov 25 19:33:49 2018 +0100 x86/speculation: Prepare for conditional IBPB in switch_mm() commit 4c71a2b6fd7e42814aa68a6dec88abf3b42ea573 upstream.
The IBPB speculation barrier is issued from switch_mm() when the kernel switches to a user space task with a different mm than the user space task which ran last on the same CPU. An additional optimization is to avoid IBPB when the incoming task can be ptraced by the outgoing task. This optimization only works when switching directly between two user space tasks. When switching from a kernel task to a user space task the optimization fails because the previous task cannot be accessed anymore. So for quite some scenarios the optimization is just adding overhead. The upcoming conditional IBPB support will issue IBPB only for user space tasks which have the TIF_SPEC_IB bit set. This requires handling the following cases: 1) Switch from a user space task (potential attacker) which has TIF_SPEC_IB set to a user space task (potential victim) which has TIF_SPEC_IB not set. 2) Switch from a user space task (potential attacker) which has TIF_SPEC_IB not set to a user space task (potential victim) which has TIF_SPEC_IB set. This needs to be optimized for the case where the IBPB can be avoided when only kernel threads ran in between user space tasks which belong to the same process. The current check whether two tasks belong to the same context is using the tasks context id. While correct, it's simpler to use the mm pointer because it allows mangling the TIF_SPEC_IB bit into it. The context id based mechanism requires extra storage, which creates worse code. When a task is scheduled out its TIF_SPEC_IB bit is mangled as bit 0 into the per CPU storage which is used to track the last user space mm which was running on a CPU. This bit can be used together with the TIF_SPEC_IB bit of the incoming task to make the decision whether IBPB needs to be issued or not to cover the two cases above. As conditional IBPB is going to be the default, remove the dubious ptrace check for the IBPB always case and simply issue IBPB always when the process changes. Move the storage to a different place in the struct as the original one created a hole. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.466447057@linutronix.de [bwh: Backported to 4.9: - Drop changes in initialize_tlbstate_and_flush() - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 1cca4d2637791c2bcefc86c532339cf2918023d7 Author: Thomas Gleixner Date: Sun Nov 25 19:33:48 2018 +0100 x86/speculation: Avoid __switch_to_xtra() calls commit 5635d99953f04b550738f6f4c1c532667c3fd872 upstream. The TIF_SPEC_IB bit does not need to be evaluated in the decision to invoke __switch_to_xtra() when: - CONFIG_SMP is disabled - The conditional STIBP mode is disabled The TIF_SPEC_IB bit still controls IBPB in both cases so the TIF work mask checks might invoke __switch_to_xtra() for nothing if TIF_SPEC_IB is the only set bit in the work masks. Optimize it out by masking the bit at compile time for CONFIG_SMP=n and at run time when the static key controlling the conditional STIBP mode is disabled.
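The bit-0 mangling described in the switch_mm() patch above amounts to the following sketch (simplified from what lands in arch/x86/mm/tlb.c; the surrounding code also handles the IBPB-always mode and updates the per-CPU value after the check):

    /* Bit 0 of the per-CPU last-user-mm value mirrors TIF_SPEC_IB */
    #define LAST_USER_MM_IBPB 0x1UL

    static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
    {
        unsigned long next_tif = task_thread_info(next)->flags;
        unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;

        return (unsigned long)next->mm | ibpb;
    }

    static inline bool cond_ibpb_needed(unsigned long prev_mm, unsigned long next_mm)
    {
        /*
         * Issue IBPB when the mm changes and either side of the switch
         * has TIF_SPEC_IB set; this covers both attacker->victim cases
         * described above.
         */
        return next_mm != prev_mm && ((next_mm | prev_mm) & LAST_USER_MM_IBPB);
    }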
Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.374062201@linutronix.de [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit b5741ef7591dad04afd67b3ea14265847033a652 Author: Thomas Gleixner Date: Sun Nov 25 19:33:47 2018 +0100 x86/process: Consolidate and simplify switch_to_xtra() code commit ff16701a29cba3aafa0bd1656d766813b2d0a811 upstream. Move the conditional invocation of __switch_to_xtra() into an inline function so the logic can be shared between 32 and 64 bit. Remove the handthrough of the TSS pointer and retrieve the pointer directly in the bitmap handling function. Use this_cpu_ptr() instead of the per_cpu() indirection. This is a preparatory change so integration of conditional indirect branch speculation optimization happens only in one place. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.280855518@linutronix.de [bwh: Backported to 4.9: - Use cpu_tss instead of cpu_tss_rw - __switch_to() still uses the tss variable, so don't delete it - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit a35a8c64221afba50d76571d96fc4563c64db81e Author: Tim Chen Date: Sun Nov 25 19:33:46 2018 +0100 x86/speculation: Prepare for per task indirect branch speculation control commit 5bfbe3ad5840d941b89bcac54b821ba14f50a0ba upstream. To avoid the overhead of STIBP always on, it's necessary to allow per task control of STIBP. Add a new task flag TIF_SPEC_IB and evaluate it during context switch if SMT is active and flag evaluation is enabled by the speculation control code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the guest/host switch works properly. This has no effect because TIF_SPEC_IB cannot be set yet and the static key which controls evaluation is off. Preparatory patch for adding the control code. [ tglx: Simplify the context switch logic and make the TIF evaluation depend on SMP=y and on the static key controlling the conditional update. 
Rename it to TIF_SPEC_IB because it controls both STIBP and IBPB ] Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.176917199@linutronix.de [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit dda365c4d0e911c6c63e580c284969069db3c63d Author: Thomas Gleixner Date: Sun Nov 25 19:33:45 2018 +0100 x86/speculation: Add command line control for indirect branch speculation commit fa1202ef224391b6f5b26cdd44cc50495e8fab54 upstream. Add command line control for user space indirect branch speculation mitigations. The new option is: spectre_v2_user= The initial options are: - on: Unconditionally enabled - off: Unconditionally disabled -auto: Kernel selects mitigation (default off for now) When the spectre_v2= command line argument is either 'on' or 'off' this implies that the application to application control follows that state even if a contradicting spectre_v2_user= argument is supplied. Originally-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185005.082720373@linutronix.de [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit d343a9412cc86aff1a8cbaa90d7b048dc785d0e4 Author: Thomas Gleixner Date: Sun Nov 25 19:33:44 2018 +0100 x86/speculation: Unify conditional spectre v2 print functions commit 495d470e9828500e0155027f230449ac5e29c025 upstream. There is no point in having two functions and a conditional at the call site. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.986890749@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit d0737990d2e7ab8c73dad92251207149bdd556bf Author: Thomas Gleixner Date: Sun Nov 25 19:33:43 2018 +0100 x86/speculataion: Mark command line parser data __initdata commit 30ba72a990f5096ae08f284de17986461efcc408 upstream. No point to keep that around. 
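For reference, the __initdata/__initconst distinction behind this patch and the __initconst fix at the top of this log: init-only parser tables are discarded after boot, but a const table must be tagged __initconst, otherwise the const qualifier and the .init.data section placement conflict. Illustrative sketch (identifier names are made up):

    #include <linux/init.h>

    /* Writable init-only scratch data: __initdata is correct */
    static char cmdline_scratch[64] __initdata;

    /* Read-only init-only table: must be __initconst, not __initdata */
    static const char * const mitigation_options[] __initconst = {
        "off", "auto", "auto,nosmt",
    };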
Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.893886356@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 8d33157c63a01bcdca9ca003f4fa238565a367a1 Author: Thomas Gleixner Date: Sun Nov 25 19:33:42 2018 +0100 x86/speculation: Mark string arrays const correctly commit 8770709f411763884535662744a3786a1806afd3 upstream. checkpatch.pl muttered when reshuffling the code: WARNING: static const char * array should probably be static const char * const Fix up all the string arrays. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.800018931@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 5fdb12373d68cd4a31fb33e1ccffe84c5b35f077 Author: Thomas Gleixner Date: Sun Nov 25 19:33:41 2018 +0100 x86/speculation: Reorder the spec_v2 code commit 15d6b7aab0793b2de8a05d8a828777dd24db424e upstream. Reorder the code so it is better grouped. No functional change. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.707122879@linutronix.de [bwh: Backported to 4.9: - We still have the minimal mitigation modes - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 9d6f23fae003031ba7aa2696075ad7e70310bd84 Author: Thomas Gleixner Date: Sun Nov 25 19:33:40 2018 +0100 x86/l1tf: Show actual SMT state commit 130d6f946f6f2a972ee3ec8540b7243ab99abe97 upstream. Use the now exposed real SMT state, not the SMT sysfs control knob state. This reflects the state of the system when the mitigation status is queried. This does not change the warning in the VMX launch code. There the dependency on the control knob makes sense because siblings could be brought online anytime after launching the VM. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.613357354@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit a3c901bfdb2e37f281cc8087d5a01bb35da64b20 Author: Thomas Gleixner Date: Sun Nov 25 19:33:39 2018 +0100 x86/speculation: Rework SMT state change commit a74cfffb03b73d41e08f84c2e5c87dec0ce3db9f upstream. 
arch_smt_update() is only called when the sysfs SMT control knob is changed. This means that when SMT is enabled in the sysfs control knob the system is considered to have SMT active even if all siblings are offline. To allow finegrained control of the speculation mitigations, the actual SMT state is more interesting than the fact that siblings could be enabled. Rework the code, so arch_smt_update() is invoked from each individual CPU hotplug function, and simplify the update function while at it. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.521974984@linutronix.de [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit c803409910a6e2ca70c3edd557aeda0055827f7a Author: Ben Hutchings Date: Fri May 10 00:46:25 2019 +0100 sched: Add sched_smt_active() Add the sched_smt_active() function needed for some x86 speculation mitigations. This was introduced upstream by commits 1b568f0aabf2 "sched/core: Optimize SCHED_SMT", ba2591a5993e "sched/smt: Update sched_smt_present at runtime", c5511d03ec09 "sched/smt: Make sched_smt_present track topology", and 321a874a7ef8 "sched/smt: Expose sched_smt_present static key". The upstream implementation uses the static_key_{disable,enable}_cpuslocked() functions, which aren't practical to backport. Signed-off-by: Ben Hutchings Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Peter Zijlstra (Intel) Cc: Konrad Rzeszutek Wilk Signed-off-by: Greg Kroah-Hartman commit 4cc154901e47d56e7491d37f7a81768ddb96e733 Author: Thomas Gleixner Date: Sun Nov 25 19:33:37 2018 +0100 x86/Kconfig: Select SCHED_SMT if SMP enabled commit dbe733642e01dd108f71436aaea7b328cb28fd87 upstream. CONFIG_SCHED_SMT is enabled by all distros, so there is not a real point to have it configurable. The runtime overhead in the core scheduler code is minimal because the actual SMT scheduling parts are conditional on a static key. This allows to expose the scheduler's SMT state static key to the speculation control code. Alternatively the scheduler's static key could be made always available when CONFIG_SMP is enabled, but that's just adding an unused static key to every other architecture for nothing. Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Tim Chen Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.337452245@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit dbbc533a9b4a82c18aba36129bb1513ac90f4bc6 Author: Tim Chen Date: Sun Nov 25 19:33:35 2018 +0100 x86/speculation: Reorganize speculation control MSRs update commit 01daf56875ee0cd50ed496a09b20eb369b45dfa5 upstream. The logic to detect whether there's a change in the previous and next task's flag relevant to update speculation control MSRs is spread out across multiple functions. 
Consolidate all checks needed for updating speculation control MSRs into the new __speculation_ctrl_update() helper function. This makes it easy to pick the right speculation control MSR and the bits in MSR_IA32_SPEC_CTRL that need updating based on TIF flags changes. Originally-by: Thomas Lendacky Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.151077005@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit fd8d77ee819fa2a56a26c54b894d664ec677bb6d Author: Thomas Gleixner Date: Sun Nov 25 19:33:34 2018 +0100 x86/speculation: Rename SSBD update functions commit 26c4d75b234040c11728a8acb796b3a85ba7507c upstream. During context switch, the SSBD bit in SPEC_CTRL MSR is updated according to changes of the TIF_SSBD flag in the current and next running task. Currently, only the bit controlling speculative store bypass disable in SPEC_CTRL MSR is updated and the related update functions all have "speculative_store" or "ssb" in their names. For enhanced mitigation control other bits in SPEC_CTRL MSR need to be updated as well, which makes the SSB names inadequate. Rename the "speculative_store*" functions to a more generic name. No functional change. Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185004.058866968@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 8a7723de5e1a57c394d885e731af6ebba990f110 Author: Tim Chen Date: Sun Nov 25 19:33:33 2018 +0100 x86/speculation: Disable STIBP when enhanced IBRS is in use commit 34bce7c9690b1d897686aac89604ba7adc365556 upstream. If enhanced IBRS is active, STIBP is redundant for mitigating Spectre v2 user space exploits from hyperthread sibling. Disable STIBP when enhanced IBRS is used. Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185003.966801480@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 20ba13aef2628548b515b4dc31ca5a5f2baa9bbd Author: Tim Chen Date: Sun Nov 25 19:33:32 2018 +0100 x86/speculation: Move STIPB/IBPB string conditionals out of cpu_show_common() commit a8f76ae41cd633ac00be1b3019b1eb4741be3828 upstream. The Spectre V2 printout in cpu_show_common() handles conditionals for the various mitigation methods directly in the sprintf() argument list. That's hard to read and will become unreadable if more complex decisions need to be made for a particular method. 
Move the conditionals for STIBP and IBPB string selection into helper functions, so they can be extended later on. Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185003.874479208@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 66c0d89b81a051ac4df051fcf770b5eb7f208200 Author: Tim Chen Date: Sun Nov 25 19:33:31 2018 +0100 x86/speculation: Remove unnecessary ret variable in cpu_show_common() commit b86bda0426853bfe8a3506c7d2a5b332760ae46b upstream. Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185003.783903657@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 61549811fcbdfd33e90b86703f97e03af9f6fbdb Author: Tim Chen Date: Sun Nov 25 19:33:30 2018 +0100 x86/speculation: Clean up spectre_v2_parse_cmdline() commit 24848509aa55eac39d524b587b051f4e86df3c12 upstream. Remove the unnecessary 'else' statement in spectre_v2_parse_cmdline() to save an indentation level. Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185003.688010903@linutronix.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit e8891b7227dffa563daf514fca3e22f5264b9776 Author: Tim Chen Date: Sun Nov 25 19:33:29 2018 +0100 x86/speculation: Update the TIF_SSBD comment commit 8eb729b77faf83ac4c1f363a9ad68d042415f24c upstream. "Reduced Data Speculation" is an obsolete term. The correct new name is "Speculative store bypass disable" - which is abbreviated into SSBD. Signed-off-by: Tim Chen Signed-off-by: Thomas Gleixner Reviewed-by: Ingo Molnar Cc: Peter Zijlstra Cc: Andy Lutomirski Cc: Linus Torvalds Cc: Jiri Kosina Cc: Tom Lendacky Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Andi Kleen Cc: Dave Hansen Cc: Casey Schaufler Cc: Asit Mallick Cc: Arjan van de Ven Cc: Jon Masters Cc: Waiman Long Cc: Greg KH Cc: Dave Stewart Cc: Kees Cook Link: https://lkml.kernel.org/r/20181125185003.593893901@linutronix.de [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit c36925835c8f8e0c9d237fd67b54878e0d3476a9 Author: Michal Hocko Date: Tue Nov 13 19:49:10 2018 +0100 x86/speculation/l1tf: Drop the swap storage limit restriction when l1tf=off commit 5b5e4d623ec8a34689df98e42d038a3b594d2ff9 upstream. Swap storage is restricted to max_swapfile_size (~16TB on x86_64) whenever the system is deemed affected by L1TF vulnerability. 
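A hedged sketch of the kind of string-selection helpers the "Move STIPB/IBPB string conditionals" change introduces: each helper returns its fragment of the sysfs line so the final format string stays a simple one-liner. The variables and strings here are illustrative, not the kernel's.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative mitigation state; not the kernel's actual variables. */
    static bool stibp_enabled = true;
    static bool ibpb_enabled;

    /* Helper returning the STIBP part of the sysfs string. */
    static const char *stibp_state(void)
    {
        return stibp_enabled ? ", STIBP" : "";
    }

    /* Helper returning the IBPB part of the sysfs string. */
    static const char *ibpb_state(void)
    {
        return ibpb_enabled ? ", IBPB" : "";
    }

    int main(void)
    {
        /* The printout in cpu_show_common() then needs no inline ternaries. */
        printf("Mitigation: Full generic retpoline%s%s\n",
               ibpb_state(), stibp_state());
        return 0;
    }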
Even though the limit is quite high for most deployments, it seems too restrictive for deployments which are willing to live with the mitigation disabled. We have a customer deploying 8x 6.4TB PCIe/NVMe SSD swap devices, which is clearly beyond the limit. Drop the swap restriction when l1tf=off is specified. It also doesn't make much sense to warn about too much memory for the l1tf mitigation when it is forcefully disabled by the administrator. [ tglx: Folded the documentation delta change ] Fixes: 377eeaa8e11f ("x86/speculation/l1tf: Limit swap file size to MAX_PA/2") Signed-off-by: Michal Hocko Signed-off-by: Thomas Gleixner Reviewed-by: Pavel Tatashin Reviewed-by: Andi Kleen Acked-by: Jiri Kosina Cc: Linus Torvalds Cc: Dave Hansen Cc: Andi Kleen Cc: Borislav Petkov Cc: Link: https://lkml.kernel.org/r/20181113184910.26697-1-mhocko@kernel.org [bwh: Backported to 4.9: adjust filenames, context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 787b367ecab5e9e722ddd257bf21a90c370eab95 Author: Jiri Kosina Date: Tue Sep 25 14:39:28 2018 +0200 x86/speculation: Propagate information about RSB filling mitigation to sysfs commit bb4b3b7762735cdaba5a40fd94c9303d9ffa147a upstream. If spectrev2 mitigation has been enabled, the RSB is filled on context switch in order to protect from various classes of spectrev2 attacks. If this mitigation is enabled, say so in sysfs for spectrev2. Signed-off-by: Jiri Kosina Signed-off-by: Thomas Gleixner Cc: Peter Zijlstra Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: "WoodhouseDavid" Cc: Andi Kleen Cc: Tim Chen Cc: "SchauflerCasey" Link: https://lkml.kernel.org/r/nycvar.YFH.7.76.1809251438580.15880@cbobk.fhfr.pm Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit b410c57f4907dcf23a29f46f15b081fb404d7f4d Author: Jiri Kosina Date: Tue Sep 25 14:38:55 2018 +0200 x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation commit 53c613fe6349994f023245519265999eed75957f upstream. STIBP is a feature provided by certain Intel ucodes / CPUs. This feature (once enabled) prevents cross-hyperthread control of decisions made by indirect branch predictors. Enable this feature if:
 - the CPU is vulnerable to spectre v2
 - the CPU supports SMT and has SMT siblings online
 - spectre_v2 mitigation autoselection is enabled (default)
After some previous discussion, this leaves STIBP on all the time, as issuing a wrmsr on every kernel boundary crossing is a no-no. This could perhaps later be optimized a bit more (like disabling it in NOHZ, experimenting with disabling it in idle, etc.) if needed. Note that the synchronization of the mask manipulation via the newly added spec_ctrl_mutex is currently not strictly needed, as the only updater is already being serialized by cpu_add_remove_lock, but let's make this a little bit more future-proof. Signed-off-by: Jiri Kosina Signed-off-by: Thomas Gleixner Cc: Peter Zijlstra Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: "WoodhouseDavid" Cc: Andi Kleen Cc: Tim Chen Cc: "SchauflerCasey" Link: https://lkml.kernel.org/r/nycvar.YFH.7.76.1809251438240.15880@cbobk.fhfr.pm Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 822e5d5358bb945c5a22f7de50de307c8a782dbe Author: Jiri Kosina Date: Tue Sep 25 14:38:18 2018 +0200 x86/speculation: Apply IBPB more strictly to avoid cross-process data leak commit dbfe2953f63c640463c630746cd5d9de8b2f63ae upstream.
Currently, IBPB is only issued in cases when switching into a non-dumpable process, the rationale being to protect such 'important and security sensitive' processes (such as GPG) from data leaking into a different userspace process via spectre v2. This is, however, completely insufficient to provide proper userspace-to-userspace spectrev2 protection, as any process can poison branch buffers before being scheduled out, and the newly scheduled process immediately becomes a spectrev2 victim. In order to minimize the performance impact (for use cases that do require spectrev2 protection), issue the barrier only in cases when switching between processes where the victim can't be ptraced by the potential attacker (as in such cases, the attacker doesn't have to bother with branch buffers at all). [ tglx: Split up PTRACE_MODE_NOACCESS_CHK into PTRACE_MODE_SCHED and PTRACE_MODE_IBPB to be able to do ptrace() context tracking reasonably fine-grained ] Fixes: 18bf3c3ea8 ("x86/speculation: Use Indirect Branch Prediction Barrier in context switch") Originally-by: Tim Chen Signed-off-by: Jiri Kosina Signed-off-by: Thomas Gleixner Cc: Peter Zijlstra Cc: Josh Poimboeuf Cc: Andrea Arcangeli Cc: "WoodhouseDavid" Cc: Andi Kleen Cc: "SchauflerCasey" Link: https://lkml.kernel.org/r/nycvar.YFH.7.76.1809251437340.15880@cbobk.fhfr.pm Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 29d4af1f21524173ac3266c45638e764d3c0d077 Author: Salvatore Bonaccorso Date: Wed Aug 15 07:46:04 2018 +0200 Documentation/l1tf: Fix small spelling typo commit 60ca05c3b44566b70d64fbb8e87a6e0c67725468 upstream. Fix a small typo (wiil -> will) in the "3.4. Nested virtual machines" section. Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry") Cc: linux-kernel@vger.kernel.org Cc: Jonathan Corbet Cc: Josh Poimboeuf Cc: Paolo Bonzini Cc: Greg Kroah-Hartman Cc: Tony Luck Cc: linux-doc@vger.kernel.org Cc: trivial@kernel.org Signed-off-by: Salvatore Bonaccorso Signed-off-by: Jonathan Corbet Signed-off-by: Thomas Gleixner [bwh: Backported to 4.9: adjust filename] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 1739ba8b00408396192ff476383e608ab5d33694 Author: Peter Zijlstra Date: Tue Aug 7 10:17:27 2018 -0700 x86/cpu: Sanitize FAM6_ATOM naming commit f2c4db1bd80720cd8cb2a5aa220d9bc9f374f04e upstream.
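To illustrate the stricter IBPB policy described in the "Apply IBPB more strictly" commit above, here is a small user-space sketch of the decision: issue the barrier unless the outgoing task (the potential attacker) could ptrace the incoming one anyway. The may_ptrace() stand-in is a deliberately simplified assumption; the real PTRACE_MODE_IBPB check is far more involved.

    #include <stdbool.h>
    #include <stdio.h>

    struct task {
        const char *name;
        int uid;        /* illustrative stand-in for real credentials */
        bool dumpable;
    };

    /* Stand-in for ptrace_may_access(..., PTRACE_MODE_IBPB): here the
     * tracer may access the victim only with matching uid and a dumpable
     * victim.  The real check is much richer than this. */
    static bool may_ptrace(const struct task *tracer, const struct task *victim)
    {
        return victim->dumpable && tracer->uid == victim->uid;
    }

    /* Skip the barrier only when the outgoing task could have ptraced the
     * incoming one anyway; otherwise protect the incoming task. */
    static bool ibpb_needed(const struct task *prev, const struct task *next)
    {
        return !may_ptrace(prev, next);
    }

    int main(void)
    {
        struct task attacker = { "attacker", 1001, true };
        struct task gpg      = { "gpg",      1000, false };
        struct task shell    = { "shell",    1000, true };
        struct task editor   = { "editor",   1000, true };

        printf("%s -> %s: IBPB %s\n", attacker.name, gpg.name,
               ibpb_needed(&attacker, &gpg) ? "yes" : "no");
        printf("%s -> %s: IBPB %s\n", shell.name, editor.name,
               ibpb_needed(&shell, &editor) ? "yes" : "no");
        return 0;
    }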
Going primarily by: https://en.wikipedia.org/wiki/List_of_Intel_Atom_microprocessors with additional information gleaned from other related pages; notably:
 - Bonnell shrink was called Saltwell
 - Moorefield is the Merriefield refresh which makes it Airmont
The general naming scheme is: FAM6_ATOM_UARCH_SOCTYPE

  for i in `git grep -l FAM6_ATOM` ; do
	sed -i -e 's/ATOM_PINEVIEW/ATOM_BONNELL/g' \
		-e 's/ATOM_LINCROFT/ATOM_BONNELL_MID/' \
		-e 's/ATOM_PENWELL/ATOM_SALTWELL_MID/g' \
		-e 's/ATOM_CLOVERVIEW/ATOM_SALTWELL_TABLET/g' \
		-e 's/ATOM_CEDARVIEW/ATOM_SALTWELL/g' \
		-e 's/ATOM_SILVERMONT1/ATOM_SILVERMONT/g' \
		-e 's/ATOM_SILVERMONT2/ATOM_SILVERMONT_X/g' \
		-e 's/ATOM_MERRIFIELD/ATOM_SILVERMONT_MID/g' \
		-e 's/ATOM_MOOREFIELD/ATOM_AIRMONT_MID/g' \
		-e 's/ATOM_DENVERTON/ATOM_GOLDMONT_X/g' \
		-e 's/ATOM_GEMINI_LAKE/ATOM_GOLDMONT_PLUS/g' ${i}
  done

Signed-off-by: Peter Zijlstra (Intel) Cc: Alexander Shishkin Cc: Arnaldo Carvalho de Melo Cc: Jiri Olsa Cc: Linus Torvalds Cc: Peter Zijlstra Cc: Stephane Eranian Cc: Thomas Gleixner Cc: Vince Weaver Cc: dave.hansen@linux.intel.com Cc: len.brown@intel.com Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner [bwh: Backported to 4.9: - Drop changes to CPU IDs that weren't already included - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 26d422c046c3f8620642ad9fde8aa36867b820c6 Author: Jiang Biao Date: Wed Jul 18 08:03:14 2018 +0800 x86/speculation: Remove SPECTRE_V2_IBRS in enum spectre_v2_mitigation commit d9f4426c73002957be5dd39936f44a09498f7560 upstream. SPECTRE_V2_IBRS in enum spectre_v2_mitigation is never used. Remove it. Signed-off-by: Jiang Biao Signed-off-by: Thomas Gleixner Cc: hpa@zytor.com Cc: dwmw2@amazon.co.uk Cc: konrad.wilk@oracle.com Cc: bp@suse.de Cc: zhong.weidong@zte.com.cn Link: https://lkml.kernel.org/r/1531872194-39207-1-git-send-email-jiang.biao2@zte.com.cn [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit b995196b9da4e2486d50e132539c848a60ea88da Author: Will Deacon Date: Tue Jun 19 13:53:08 2018 +0100 locking/atomics, asm-generic: Move some macros from <linux/bitops.h> to a new file commit 8bd9cb51daac89337295b6f037b0486911e1b408 upstream. In preparation for implementing the asm-generic atomic bitops in terms of atomic_long_*(), we need to prevent implementations from pulling in <linux/bitops.h>. A common reason for this include is the BITS_PER_BYTE definition, so move this and some other BIT() and masking macros into a new header file, <linux/bits.h>. Signed-off-by: Will Deacon Acked-by: Peter Zijlstra (Intel) Cc: Linus Torvalds Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: linux-arm-kernel@lists.infradead.org Cc: yamada.masahiro@socionext.com Link: https://lore.kernel.org/lkml/1529412794-17720-4-git-send-email-will.deacon@arm.com Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit ef0efbb7a99a1e51a0391f3fc51f7a3d505c179e Author: Dominik Brodowski Date: Tue May 22 11:05:39 2018 +0200 x86/speculation: Simplify the CPU bug detection logic commit 8ecc4979b1bd9c94168e6fc92960033b7a951336 upstream. Only CPUs which speculate can speculate. Therefore, it seems prudent to test for cpu_no_speculation first and only then determine whether a specific speculating CPU is susceptible to store bypass speculation. This is underlined by the fact that all CPUs currently listed in cpu_no_speculation were also present in cpu_no_spec_store_bypass.
Signed-off-by: Dominik Brodowski Signed-off-by: Thomas Gleixner Cc: bp@suse.de Cc: konrad.wilk@oracle.com Link: https://lkml.kernel.org/r/20180522090539.GA24668@light.dominikbrodowski.net Signed-off-by: Thomas Gleixner Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit c6693781ddaf21dd3746bd74ba0c66e013782b06 Author: Matthias Kaehlcke Date: Fri Sep 8 16:14:33 2017 -0700 bitops: avoid integer overflow in GENMASK(_ULL) commit c32ee3d9abd284b4fcaacc250b101f93829c7bae upstream. GENMASK(_ULL) performs a left-shift of ~0UL(L), which technically results in an integer overflow. clang raises a warning if the overflow occurs in a preprocessor expression. Clear the low-order bits through a subtraction instead of the left-shift to avoid the overflow. (akpm: no change in .text size in my testing) Link: http://lkml.kernel.org/r/20170803212020.24939-1-mka@chromium.org Signed-off-by: Matthias Kaehlcke Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 08e501b5ff9f67f592b33d5e76e28d539a490041 Author: Nadav Amit Date: Sun Sep 2 11:14:50 2018 -0700 x86/mm: Use WRITE_ONCE() when setting PTEs commit 9bc4f28af75a91aea0ae383f50b0a430c4509303 upstream. When page-table entries are set, the compiler might optimize their assignment by using multiple instructions to set the PTE. This might turn into a security hazard if the user somehow manages to use the interim PTE. L1TF does not make our lives easier, making even an interim non-present PTE a security hazard. Using WRITE_ONCE() to set PTEs and friends should prevent this potential security hazard. I skimmed the differences in the binary with and without this patch. The differences are (obviously) greater when CONFIG_PARAVIRT=n as more code optimizations are possible. For better or worse, the impact on the binary with this patch is pretty small. Skimming the code did not cause anything to jump out as a security hazard, but it seems that at least move_soft_dirty_pte() caused set_pte_at() to use multiple writes. Signed-off-by: Nadav Amit Signed-off-by: Thomas Gleixner Acked-by: Peter Zijlstra (Intel) Cc: Dave Hansen Cc: Andi Kleen Cc: Josh Poimboeuf Cc: Michal Hocko Cc: Vlastimil Babka Cc: Sean Christopherson Cc: Andy Lutomirski Link: https://lkml.kernel.org/r/20180902181451.80520-1-namit@vmware.com [bwh: Backported to 4.9: - Drop changes in pmdp_establish(), native_set_p4d(), pudp_set_access_flags() - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit e160f1dea94efbad525d169df5379402c8c5ad05 Author: Filippo Sironi Date: Tue Jul 31 17:29:30 2018 +0200 x86/microcode: Update the new microcode revision unconditionally commit 8da38ebaad23fe1b0c4a205438676f6356607cfc upstream. Handle the case where microcode gets loaded on the BSP's hyperthread sibling first and the boot_cpu_data's microcode revision doesn't get updated because of early exit due to the siblings sharing a microcode engine. For that, simply write the updated revision on all CPUs unconditionally.
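The GENMASK(_ULL) fix described above replaces the left-shift of ~0UL with a subtraction so the low bits are cleared without the shift that clang flags. A standalone sketch of that overflow-free form, with BITS_PER_LONG defined locally for the example:

    #include <stdio.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * 8)

    /* Overflow-free form: clear the low-order bits by subtraction rather
     * than by left-shifting ~0UL. */
    #define GENMASK(h, l) \
        (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

    int main(void)
    {
        printf("GENMASK(7, 4)  = %#lx\n", GENMASK(7, 4));   /* 0xf0 */
        printf("GENMASK(31, 0) = %#lx\n", GENMASK(31, 0));  /* 0xffffffff */
        return 0;
    }

The subtraction form produces the same mask as the earlier left-shift of ~0UL, so callers see no behavioural change.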
Signed-off-by: Filippo Sironi Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Cc: prarit@redhat.com Link: http://lkml.kernel.org/r/1533050970-14385-1-git-send-email-sironi@amazon.de [bwh: Backported to 4.9: - Keep returning 0 on success - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 9e99161b71447331007b1bef13be067353e9ff2b Author: Prarit Bhargava Date: Tue Jul 31 07:27:39 2018 -0400 x86/microcode: Make sure boot_cpu_data.microcode is up-to-date commit 370a132bb2227ff76278f98370e0e701d86ff752 upstream. When preparing an MCE record for logging, boot_cpu_data.microcode is used to read out the microcode revision on the box. However, on systems where a late microcode update has happened, the microcode revision output in a MCE log record is wrong because boot_cpu_data.microcode is not updated when the microcode gets updated. But, the microcode revision saved in boot_cpu_data's microcode member should be kept up-to-date, regardless, for consistency. Make it so. Fixes: fa94d0c6e0f3 ("x86/MCE: Save microcode revision in machine check records") Signed-off-by: Prarit Bhargava Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Cc: Tony Luck Cc: sironi@amazon.de Link: http://lkml.kernel.org/r/20180731112739.32338-1-prarit@redhat.com [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 97d70759908b6a94f3679ce38f2ec3e4da9f3a22 Author: Ashok Raj Date: Wed Feb 28 11:28:41 2018 +0100 x86/microcode/intel: Check microcode revision before updating sibling threads commit c182d2b7d0ca48e0d6ff16f7d883161238c447ed upstream. After updating microcode on one of the threads of a core, the other thread sibling automatically gets the update since the microcode resources on a hyperthreaded core are shared between the two threads. Check the microcode revision on the CPU before performing a microcode update and thus save the WRMSR 0x79, which is a particularly expensive operation. [ Borislav: Massage changelog and coding style. ] Signed-off-by: Ashok Raj Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Tested-by: Tom Lendacky Tested-by: Ashok Raj Cc: Arjan Van De Ven Link: http://lkml.kernel.org/r/1519352533-15992-2-git-send-email-ashok.raj@intel.com Link: https://lkml.kernel.org/r/20180228102846.13447-3-bp@alien8.de [bwh: Backported to 4.9: return 0 in this case] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 2678bc5cef4007463d75ff4ddc35b86aa3fb04ed Author: Borislav Petkov Date: Mon Jan 9 12:41:45 2017 +0100 x86/microcode/intel: Add a helper which gives the microcode revision commit 4167709bbf826512a52ebd6aafda2be104adaec9 upstream. Since on Intel we're required to do CPUID(1) first, before reading the microcode revision MSR, let's add a special helper which does the required steps so that we don't forget to do them next time, when we want to read the microcode revision. Signed-off-by: Borislav Petkov Link: http://lkml.kernel.org/r/20170109114147.5082-4-bp@alien8.de Signed-off-by: Thomas Gleixner [bwh: Backported to 4.9: - Keep using sync_core(), which always includes the necessary CPUID - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit a7501dca303c95e15a1ecc8830729b264081ca1a Author: Tom Lendacky Date: Mon Jul 2 16:36:02 2018 -0500 x86/bugs: Fix the AMD SSBD usage of the SPEC_CTRL MSR commit 612bc3b3d4be749f73a513a17d9b3ee1330d3487 upstream.
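As an illustration of the microcode-revision commits above (read the current revision through a helper, and skip the expensive WRMSR when the sibling thread already carries the update), here is a hedged user-space simulation; the MSR access is stubbed with a variable and the revision numbers are made up.

    #include <stdint.h>
    #include <stdio.h>

    /* Stub for reading MSR_IA32_UCODE_REV; on real Intel hardware the
     * helper writes 0 to the MSR, executes CPUID(1), then reads the
     * revision back.  Here a plain variable stands in for the hardware. */
    static uint32_t cur_rev = 0x25;

    static uint32_t intel_get_microcode_revision(void)
    {
        return cur_rev;
    }

    /* The expensive step we want to avoid repeating. */
    static void wrmsr_load_microcode(uint32_t new_rev)
    {
        printf("WRMSR 0x79: loading revision %#x\n", new_rev);
        cur_rev = new_rev;
    }

    static void apply_microcode(uint32_t new_rev)
    {
        /* Sibling threads share the microcode engine, so the other
         * thread may already have the update: check first. */
        if (intel_get_microcode_revision() >= new_rev) {
            printf("revision %#x already present, skipping WRMSR\n", new_rev);
            return;
        }
        wrmsr_load_microcode(new_rev);
    }

    int main(void)
    {
        apply_microcode(0x2e);  /* first sibling: performs the load      */
        apply_microcode(0x2e);  /* second sibling: early exit, no WRMSR  */
        return 0;
    }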
On AMD, the presence of the MSR_SPEC_CTRL feature does not imply that the SSBD mitigation support should use the SPEC_CTRL MSR. Other features could have caused the MSR_SPEC_CTRL feature to be set, while a different SSBD mitigation option is in place. Update the SSBD support to check for the actual SSBD features that will use the SPEC_CTRL MSR. Signed-off-by: Tom Lendacky Cc: Borislav Petkov Cc: David Woodhouse Cc: Konrad Rzeszutek Wilk Cc: Linus Torvalds Cc: Peter Zijlstra Cc: Thomas Gleixner Fixes: 6ac2f49edb1e ("x86/bugs: Add AMD's SPEC_CTRL MSR usage") Link: http://lkml.kernel.org/r/20180702213602.29202.33151.stgit@tlendack-t1.amdoffice.net Signed-off-by: Ingo Molnar Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit c2185a44e742c82a6975ff1c96f8e95053658ca8 Author: Konrad Rzeszutek Wilk Date: Fri Jun 1 10:59:21 2018 -0400 x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features commit 108fab4b5c8f12064ef86e02cb0459992affb30f upstream. Both AMD and Intel can have SPEC_CTRL_MSR for SSBD. However AMD also has two more other ways of doing it - which are !SPEC_CTRL MSR ways. Signed-off-by: Konrad Rzeszutek Wilk Signed-off-by: Thomas Gleixner Cc: Kees Cook Cc: kvm@vger.kernel.org Cc: KarimAllah Ahmed Cc: andrew.cooper3@citrix.com Cc: "H. Peter Anvin" Cc: Borislav Petkov Cc: David Woodhouse Link: https://lkml.kernel.org/r/20180601145921.9500-4-konrad.wilk@oracle.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 9ad055877c939e4d8661ede9e2aa1cc3691cef89 Author: Konrad Rzeszutek Wilk Date: Fri Jun 1 10:59:20 2018 -0400 x86/bugs: Add AMD's SPEC_CTRL MSR usage commit 6ac2f49edb1ef5446089c7c660017732886d62d6 upstream. The AMD document outlining the SSBD handling 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf mentions that if CPUID 8000_0008.EBX[24] is set we should be using the SPEC_CTRL MSR (0x48) over the VIRT SPEC_CTRL MSR (0xC001_011f) for speculative store bypass disable. This in effect means we should clear the X86_FEATURE_VIRT_SSBD flag so that we would prefer the SPEC_CTRL MSR. See the document titled: 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf A copy of this document is available at https://bugzilla.kernel.org/show_bug.cgi?id=199889 Signed-off-by: Konrad Rzeszutek Wilk Signed-off-by: Thomas Gleixner Cc: Tom Lendacky Cc: Janakarajan Natarajan Cc: kvm@vger.kernel.org Cc: KarimAllah Ahmed Cc: andrew.cooper3@citrix.com Cc: Joerg Roedel Cc: Radim Krčmář Cc: Andy Lutomirski Cc: "H. Peter Anvin" Cc: Paolo Bonzini Cc: Borislav Petkov Cc: David Woodhouse Cc: Kees Cook Link: https://lkml.kernel.org/r/20180601145921.9500-3-konrad.wilk@oracle.com [bwh: Backported to 4.9: - Update feature test in guest_cpuid_has_spec_ctrl() instead of svm_{get,set}_msr() - Adjust context, indentation] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 98ccdae863f37056ad43681e3d2410790447973b Author: Konrad Rzeszutek Wilk Date: Fri Jun 1 10:59:19 2018 -0400 x86/bugs: Add AMD's variant of SSB_NO commit 24809860012e0130fbafe536709e08a22b3e959e upstream. The AMD document outlining the SSBD handling 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf mentions that the CPUID 8000_0008.EBX[26] will mean that the speculative store bypass disable is no longer needed. 
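A small sketch of the SSBD mechanism selection the two AMD commits above describe: prefer the architectural SPEC_CTRL MSR when CPUID 8000_0008.EBX[24] advertises it, and fall back to the virtualized or non-architectural paths otherwise. The struct, the EBX[25] interpretation and the LS_CFG fallback string are assumptions for this sketch, not kernel code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative feature flags as they might be derived from CPUID
     * 8000_0008.EBX; purely for this sketch. */
    struct ssbd_features {
        bool amd_ssbd;   /* EBX[24]: SSBD via the SPEC_CTRL MSR               */
        bool virt_ssbd;  /* EBX[25]: SSBD via the VIRT_SPEC_CTRL MSR (assumed) */
    };

    static const char *ssbd_mechanism(const struct ssbd_features *f)
    {
        /* Prefer the architectural SPEC_CTRL MSR (0x48) when advertised;
         * only fall back to VIRT_SPEC_CTRL (0xC001_011F) or the
         * non-architectural path otherwise. */
        if (f->amd_ssbd)
            return "SPEC_CTRL MSR";
        if (f->virt_ssbd)
            return "VIRT_SPEC_CTRL MSR";
        return "non-architectural (e.g. LS_CFG) or unsupported";
    }

    int main(void)
    {
        struct ssbd_features newer = { .amd_ssbd = true,  .virt_ssbd = true };
        struct ssbd_features guest = { .amd_ssbd = false, .virt_ssbd = true };

        printf("newer CPU: %s\n", ssbd_mechanism(&newer));
        printf("guest CPU: %s\n", ssbd_mechanism(&guest));
        return 0;
    }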
A copy of this document is available at: https://bugzilla.kernel.org/show_bug.cgi?id=199889 Signed-off-by: Konrad Rzeszutek Wilk Signed-off-by: Thomas Gleixner Cc: Tom Lendacky Cc: Janakarajan Natarajan Cc: kvm@vger.kernel.org Cc: andrew.cooper3@citrix.com Cc: Andy Lutomirski Cc: "H. Peter Anvin" Cc: Borislav Petkov Cc: David Woodhouse Link: https://lkml.kernel.org/r/20180601145921.9500-2-konrad.wilk@oracle.com [bwh: Backported to 4.9: adjust context, indentation] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 7a473303c9e176ccb2e5025a69797944f7d355a5 Author: Ben Hutchings Date: Wed Nov 7 17:09:42 2018 +0000 x86/cpufeatures: Hide AMD-specific speculation flags Hide the AMD_{IBRS,IBPB,STIBP} flag from /proc/cpuinfo. This was done upstream as part of commit e7c587da1252 "x86/speculation: Use synthetic bits for IBRS/IBPB/STIBP". That commit has already been backported but this part was omitted. Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 125a6a65b9feb47a561a7ee98bf8ba91d82e6e2e Author: Tony Luck Date: Tue Mar 6 15:21:41 2018 +0100 x86/MCE: Save microcode revision in machine check records commit fa94d0c6e0f3431523f5701084d799c77c7d4a4f upstream. Updating microcode used to be relatively rare. Now that it has become more common we should save the microcode version in a machine check record to make sure that those people looking at the error have this important information bundled with the rest of the logged information. [ Borislav: Simplify a bit. ] Signed-off-by: Tony Luck Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Cc: Yazen Ghannam Cc: linux-edac Link: http://lkml.kernel.org/r/20180301233449.24311-1-tony.luck@intel.com [bwh: Backported to 4.9: - Also add ppin field to struct mce, to match upstream UAPI - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman
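For completeness, a small user-space probe for the CPUID 8000_0008.EBX bits referenced by the AMD SSBD and SSB_NO commits above. It assumes an x86 toolchain with <cpuid.h> (GCC or clang); the bit positions follow the cited AMD whitepaper, and the EBX[25] comment is an assumption of this sketch.

    #include <cpuid.h>
    #include <stdio.h>

    /* CPUID 8000_0008.EBX bits described in the AMD whitepaper cited above. */
    #define AMD_SSBD_BIT   (1u << 24)  /* SSBD controlled via SPEC_CTRL MSR */
    #define VIRT_SSBD_BIT  (1u << 25)  /* SSBD via VIRT_SPEC_CTRL (assumed) */
    #define AMD_SSB_NO_BIT (1u << 26)  /* SSB disable not needed at all     */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 0x80000008 not available");
            return 1;
        }
        printf("SSBD via SPEC_CTRL : %s\n", (ebx & AMD_SSBD_BIT)   ? "yes" : "no");
        printf("SSBD via VIRT MSR  : %s\n", (ebx & VIRT_SSBD_BIT)  ? "yes" : "no");
        printf("SSB_NO (no SSBD)   : %s\n", (ebx & AMD_SSB_NO_BIT) ? "yes" : "no");
        return 0;
    }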