Fast FMAF is not a sufficient condition to enable denormals.
Before VI, enabling denormals caused F32 instructions to
run at F64 speeds.
llvm-svn: 339278
The way address space declarations for builtins currently work
is nearly useless. The code assumes the address space used for
builtins is a confusingly named "target address space" from user
code using __attribute__((address_space(N))) that matches
the builtin declaration. There's no way to use this to declare
a builtin that returns a language specific address space.
The terminology used is highly confusing since it has nothing
to do with the address space selected by the target to use
for a language address space.
This feature is essentially unused as-is. AMDGPU and NVPTX
are the only in-tree targets attempting to use this. The AMDGPU
builtins certainly do not behave as intended (i.e. all of the
builtins returning pointers can never compile because the numbered
address space never matches the expected named address space).
The NVPTX builtins are missing tests for some, and the others
seem to rely on an implicit addrspacecast.
Change the address space used for builtins based on a target
hook to allow using a language address space for a builtin.
This allows the same builtin declaration to be used for multiple
languages with similarly purposed address spaces (e.g. the same
AMDGPU builtin can be used in OpenCL and CUDA even though the
constant address spaces are arbitrarily different).
This breaks the possibility of using arbitrary numbered
address spaces alongside the named address spaces for builtins.
If this is an issue we probably need to introduce another builtin
declaration character to distinguish language address spaces from
so-called "target address spaces".
llvm-svn: 338707
This adds tests for Armv8.4-A, and also some v8.2 and v8.3 tests that were
missing.
Differential Revision: https://reviews.llvm.org/D50068
llvm-svn: 338525
Summary: Microsoft's C++ object model for ARM64 is the same as that for X86_64.
For example, small structs with non-trivial copy constructors or virtual
function tables are passed indirectly. Currently, they are passed in registers
when compiled with clang.
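A minimal sketch of the kind of type affected (illustrative, not from the
patch): a small struct with a non-trivial copy constructor, which the
Microsoft ARM64 ABI passes indirectly just as x86_64 does.
```
struct NonTrivial {
  int value;
  NonTrivial() : value(0) {}
  NonTrivial(const NonTrivial &other) : value(other.value) {}
};

// With this change, 'nt' is passed indirectly on ARM64 MSVC targets
// instead of in registers.
int consume(NonTrivial nt) { return nt.value; }
```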
Reviewers: rnk, mstorsjo, TomTan, haripul, javed.absar
Reviewed By: rnk, mstorsjo
Subscribers: kristof.beyls, chrib, llvm-commits, cfe-commits
Differential Revision: https://reviews.llvm.org/D49770
llvm-svn: 338076
Changing it to unsigned long (which is 32-bit on wasm32) makes it the same
type as wasm64 (where unsigned long is 64-bit), which would eliminate the most
common cause for mangled names being different between wasm32 and wasm64. For
example, export lists containing symbol names could now often be the same
between wasm32 and wasm64.
Differential Revision: https://reviews.llvm.org/D40526
llvm-svn: 337783
As documented here: https://software.intel.com/en-us/node/682969 and
https://software.intel.com/en-us/node/523346, cpu_dispatch is an ICC
feature that provides function multiversioning.
This feature is implemented with two attributes: First, cpu_specific,
which specifies the individual function versions. Second, cpu_dispatch,
which specifies the location of the resolver function and the list of
resolvable functions.
This is valuable since it provides a mechanism where the resolver's TU
can be specified in one location, and the individual implementations
each in their own translation units.
The goal of this patch is to be source-compatible with ICC, though this
implementation diverges from the ICC implementation in a few ways:
1- Linux x86/64 only: This implementation uses ifuncs in order to
properly dispatch functions. This is a valuable performance benefit
over the ICC implementation. A future patch will be provided to enable
this feature on Windows, but it will obviously fit ICC's
implementation more closely.
2- CPU Identification functions: ICC uses a set of custom functions to identify
the feature list of the host processor. This patch uses the cpu_supports
functionality in order to better align with 'target' multiversioning.
3- cpu_dispatch function def/decl: ICC's cpu_dispatch requires that the function
marked cpu_dispatch be an empty definition. This patch supports that as well;
however, declarations are also permitted, since the linker will solve the
issue of multiple emissions.
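A minimal sketch of the attribute syntax described above (the function name
and the CPU list are illustrative; compile for x86-64 Linux):
```
__attribute__((cpu_specific(generic)))
int encode(void) { return 0; }

__attribute__((cpu_specific(atom)))
int encode(void) { return 1; }

// The resolver: ICC requires an empty definition here; with this patch a
// plain declaration like this is accepted as well.
__attribute__((cpu_dispatch(generic, atom)))
int encode(void);
```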
Differential Revision: https://reviews.llvm.org/D47474
llvm-svn: 337552
The SPIR target currently allows for half precision floating point types to be
emitted using the LLVM intrinsic functions which convert half types to floats
and doubles. However, this is illegal in SPIR as the only intrinsic allowed by
SPIR is memcpy, as per section 3 of the SPIR specification. Currently this is
leading to an assert being hit in the Clang CodeGen when attempting to emit a
constant or literal _Float16 type in a comparison operation on a SPIR or SPIR64
target. This assert stems from the CodeGen attempting to emit a constant half
value as an integer because the backend has specified that it is using these
half conversion intrinsics (which represents half as i16). This patch prevents
SPIR targets from using these intrinsics by overloading the responsible target
info method, marks SPIR targets as having a legal half type and provides
additional regression testing for the _Float16 type on SPIR targets.
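A minimal reproducer in the spirit of the added tests (illustrative,
compile-only, e.g. with a spir or spir64 triple):
```
int is_small(_Float16 h) {
  // The _Float16 literal previously hit the CodeGen assert on SPIR targets
  // because the constant was emitted as an i16 via the half-conversion
  // intrinsics.
  return h < (_Float16)1.0;
}
```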
Patch by: Stephen McGroarty
Differential Revision: https://reviews.llvm.org/D48188
llvm-svn: 335111
Disable the use of the type __float128 for PPC machines older
than Power9.
The use of -mfloat128 for PPC machines older than Power9 will result
in an error.
Differential Revision: https://reviews.llvm.org/D48088
llvm-svn: 334613
Adding __attribute__((aligned(32))) to __m256 breaks the implementation
of _mm256_loadu_ps on Windows. On Windows, alignment attributes have
higher precedence than packing attributes.
We also might want to carefully consider the consequences of changing
our vector typedefs, since many users copy them and invent their own
new, non-Intel specific vector type names.
llvm-svn: 333958
This fixes two major problems:
- We were not capping vector alignment as desired on 32-bit ARM.
- We were using different alignments based on the AVX settings on
Intel, so we did not have a consistent ABI.
This is an ABI break, but we think we can get away with it because
vectors tend to be used mostly in inline code (which is why not having
a consistent ABI has not proven disastrous on Intel).
Intel's AVX types are specified as having 32-byte / 64-byte alignment,
so align them explicitly instead of relying on the base ABI rule.
Note that this sort of attribute is stripped from template arguments
in template substitution, so there's a possibility that code templated
over vectors will produce inadequately-aligned objects. The right
long-term solution for this is for alignment attributes to be
interpreted as true qualifiers and thus preserved in the canonical type.
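A small sketch of the template caveat above (the typedef and wrapper are
illustrative, not from the patch):
```
#include <cstdio>

typedef float avx_t __attribute__((vector_size(32), aligned(32)));

template <typename T> struct Wrapped { T v; };

int main() {
  // The aligned attribute is part of avx_t itself, but it can be stripped
  // when avx_t is used as a template argument, so the wrapper may report a
  // smaller alignment than the typedef.
  std::printf("alignof(avx_t)          = %zu\n", alignof(avx_t));
  std::printf("alignof(Wrapped<avx_t>) = %zu\n", alignof(Wrapped<avx_t>));
}
```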
llvm-svn: 333791
An intrinsic for an old instruction, as described in the Intel SDM.
Reviewers: craig.topper, rnk
Reviewed By: craig.topper, rnk
Differential Revision: https://reviews.llvm.org/D47142
llvm-svn: 333256
This follows the change made in gcc by https://gcc.gnu.org/ml/gcc-cvs/2018-04/msg00534.html.
The -mibt feature flag is being removed, and the -fcf-protection
option now also defines a CET macro and causes errors when used
on non-X86 targets, while X86 targets no longer check for -mibt
and -mshstk to determine if -fcf-protection is supported. -mshstk
is now used only to determine availability of shadow stack intrinsics.
Comes with an LLVM patch (D46882).
Patch by mike.dvoretsky
Differential Revision: https://reviews.llvm.org/D46881
llvm-svn: 332704
When looking at lib/Basic/Targets/OSTargets.h, I noticed that _REENTRANT is defined
unconditionally on Solaris, unlike all other targets and unlike what either Studio cc (which only
defines it with -mt) or gcc (which only defines it with -pthread) does.
This patch follows that lead.
Differential Revision: https://reviews.llvm.org/D41241
llvm-svn: 332343
The option enables use of 32-bit pointers for accessing
const/local/shared memory. The feature is disabled by default.
Differential Revision: https://reviews.llvm.org/D46148
llvm-svn: 331938
This is similar to the LLVM change https://reviews.llvm.org/D46290.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
Differential Revision: https://reviews.llvm.org/D46320
llvm-svn: 331834
Summary:
The getConstraintRegister method is used by semantic checking of
inline assembly statements in order to diagnose conflicts between
clobber list and input/output lists. Currently ARM and AArch64 don't
override getConstraintRegister, so conflicts between registers
assigned to variables in asm labels and clobber lists are not
diagnosed. Such conflicts can cause assertion failures in the back end
and even miscompilations.
This patch implements getConstraintRegister for ARM and AArch64
targets. Since these targets don't have single-register constraints,
the implementation is trivial and just returns the register specified
in an asm label (if any).
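A sketch of the kind of conflict that is now diagnosed (AArch64,
compile-only, names illustrative): the variable is pinned to x1 via an asm
label while the statement also clobbers x1.
```
long bump(long in) {
  register long tmp asm("x1") = in;
  // With this patch clang reports the conflict between the asm label and
  // the clobber list instead of leaving it to the backend.
  asm volatile("add %0, %0, #1" : "+r"(tmp) : : "x1");
  return tmp;
}
```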
Reviewers: eli.friedman, javed.absar, thopre
Reviewed By: thopre
Subscribers: rengolin, eraman, rogfer01, myatsina, kristof.beyls, cfe-commits, chrib
Differential Revision: https://reviews.llvm.org/D45965
llvm-svn: 331164
This adds a pre-defined macro to test if the compiler has support for the
v8.2-A dot product intrinsics in AArch32 mode.
The AArch64 equivalent has already been added by rL330229.
The ACLE spec which describes this macro hasn't been published yet, but this is
based on the final internal draft, and GCC has already implemented this.
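Illustrative use of the new feature-test macro (the intrinsic named in the
comment is just an example):
```
#if defined(__ARM_FEATURE_DOTPROD)
#include <arm_neon.h>
// v8.2-A dot product intrinsics such as vdot_u32 can be used here.
#endif
```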
Differential revision: https://reviews.llvm.org/D46108
llvm-svn: 331038
When rebasing https://reviews.llvm.org/D40898 with GCC 5.4 on Solaris 11.4, I ran
into a few instances of
In file included from /vol/llvm/src/compiler-rt/local/test/asan/TestCases/Posix/asan-symbolize-sanity-test.cc:19:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/string:40:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/bits/char_traits.h:39:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/bits/stl_algobase.h:64:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/bits/stl_pair.h:59:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/bits/move.h:57:
/usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/type_traits:311:39: error: __float128 is not supported on this target
struct __is_floating_point_helper<__float128>
^
during make check-all. The line above is inside
#if !defined(__STRICT_ANSI__) && defined(_GLIBCXX_USE_FLOAT128)
template<>
struct __is_floating_point_helper<__float128>
: public true_type { };
#endif
While the libstdc++ header indicates support for __float128, clang does not, but
should. The following patch implements this and fixes those errors.
Differential Revision: https://reviews.llvm.org/D41240
llvm-svn: 330572
Currently, the interaction between the triple, the CPU, and the
supported features is a mess: the driver edits the triple to indicate
the supported architecture version, and the LLVM backend uses this to
figure out what instructions are legal. This makes it difficult to
understand what's happening, and makes it impossible to LTO together two
modules with different computed architectures.
Instead of relying on triple rewriting to get the correct target
features, we should add the right target features explicitly.
Differential Revision: https://reviews.llvm.org/D45240
llvm-svn: 330169
The WBNOINVD instruction writes back all modified
cache lines in the processor’s internal cache to main memory
but does not invalidate (flush) the internal caches.
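Illustrative use of the new intrinsic (compile-only sketch; WBNOINVD is a
privileged instruction, and the header and spelling below assume the usual
immintrin.h convention):
```
#include <immintrin.h>

void writeback_all_caches(void) {
  _wbnoinvd(); // write back modified lines without invalidating the caches
}
```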
Reviewers: craig.topper, zvi, ashlykov
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D43817
llvm-svn: 329848
When NVPTX TARGET_BUILTIN specifies sm_XX or ptxYY as a required feature,
consider those features available if we're compiling for GPU >= sm_XX or have
enabled PTX version >= ptxYY.
Differential Revision: https://reviews.llvm.org/D45061
llvm-svn: 329829
Sometimes when people compile bpf programs with
"clang ... -target bpf ...", the kernel header
files may contain host-arch inline assembly code
as in the patch https://patchwork.kernel.org/patch/10119683/
by Arnaldo Carvalho de Melo.
The current workaround in the above patch
is to guard the inline assembly with "#ifndef __BPF__"
macro. So when __BPF__ is defined, these macros will
have no use.
Such a method is not extensible. As a matter of fact,
most of this inline assembly code will be thrown away
at the end of clang compilation.
So for the bpf target, this patch accepts all asm register
names at the clang AST stage. The name will be checked
again during llc code generation if the inline assembly
code is indeed for bpf programs.
With this patch, the above "#ifndef __BPF__" is not needed
any more in https://patchwork.kernel.org/patch/10119683/.
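A sketch of the pattern this makes unnecessary (the helper and the x86
assembly are illustrative): kernel headers can contain host-arch inline asm
even when the TU is built with -target bpf, and the guard below is no longer
required because the register names are only rejected if the asm survives
into BPF code generation.
```
static inline unsigned int host_tsc_lo(void) {
#ifndef __BPF__
  unsigned int lo, hi;
  asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
  (void)hi;
  return lo;
#else
  return 0;
#endif
}
```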
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 329823
amdgcn targets only support HIP, which does not define __CUDA_ARCH__.
This is a partial unroll of r329232 / D45277.
Differential Revision: https://reviews.llvm.org/D45387
llvm-svn: 329584
Found via codespell -q 3 -I ../clang-whitelist.txt
where the whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
Summary:
This patch extends getTargetDefines, implements handleTargetFeatures
and hasFeature, and defines the corresponding macros for those features.
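Illustrative use of the kind of macros defined here (treat the exact
spellings as assumptions following the RISC-V C API conventions):
```
#if defined(__riscv) && defined(__riscv_mul)
// Only compiled when targeting RISC-V with the M extension enabled.
static inline int fast_mul(int a, int b) { return a * b; }
#endif
```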
Reviewers: asb, apazos, eli.friedman
Differential Revision: https://reviews.llvm.org/D44727
Patch by Kito Cheng.
llvm-svn: 329278
Microsoft has reserved 'U' for the PreserveMostCC which is used in the
swift runtime. Add support for this. This allows the swift runtime to
be built for Windows again.
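Illustrative declaration using the calling convention in question (the
function name is made up):
```
// Uses the convention whose Microsoft mangling code ('U') is now supported.
__attribute__((preserve_most)) void swift_runtime_hook(void);
```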
llvm-svn: 329025
Summary:
Allow rN registers to be simply parsed as the corresponding xN registers.
The "register ... asm("rN")" syntax is a command to the
compiler's register allocator, not an operand to any individual assembly
instruction. GCC documents this syntax as "...the name of the register
that should be used."
This is needed to support the changes in Linux kernel (see
https://lkml.org/lkml/2018/3/1/268 )
Note: This will add support only for the limited use case of
register ... asm("rN"). Any other uses that make rN leak into assembly
are not supported.
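A sketch of the accepted use case (AArch64, illustrative):
```
void pin_to_r7(long v) {
  // "r7" is now parsed as the corresponding x7 register.
  register long val asm("r7") = v;
  asm volatile("" : : "r"(val)); // keep the value live in that register
}
```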
Reviewers: kristof.beyls, rengolin, peter.smith, t.p.northover
Reviewed By: peter.smith
Subscribers: javed.absar, eraman, cfe-commits, srhines
Differential Revision: https://reviews.llvm.org/D44815
llvm-svn: 328829
ObjC and ObjC++ pass non-trivial structs in ways that are incompatible
with each other. For example:
typedef struct {
  id f0;
  __weak id f1;
} S;
// this code is compiled in c++.
extern "C" {
  void foo(S s);
}
void caller() {
  // the caller passes the parameter indirectly and destructs it.
  foo(S());
}
// this function is compiled in c.
// 'a' is passed directly and is destructed in the callee.
void foo(S a) {
}
This patch fixes the incompatibility by passing and returning structs
with __strong or __weak fields using the C ABI in C++ mode. __strong and
__weak fields in a struct do not cause the struct to be destructed in
the caller and __strong fields do not cause the struct to be passed
indirectly.
Also, this patch fixes the Microsoft ABI bug mentioned here:
https://reviews.llvm.org/D41039?id=128767#inline-364710
rdar://problem/38887866
Differential Revision: https://reviews.llvm.org/D44908
llvm-svn: 328731
Need to override convertConstraint to recognise amdgpu specific register names.
Differential Revision: https://reviews.llvm.org/D44533
llvm-svn: 328359
For generating NEON intrinsics, this determines the NEON data type, and whether
it should be a half type or an i16 type. That is, we always pass a half type for
AArch64 (this hasn't changed), and now also for ARM, but only when FullFP16 is
enabled; otherwise an i16 type is used.
This is intended to be a non-functional change, but together with the backend
work in D44538 which adds support for f16 vectors, this enables adding the
AArch32 FP16 (vector) intrinsics.
Differential Revision: https://reviews.llvm.org/D44561
llvm-svn: 327836
This is a partial recommit of r327189 that was reverted
due to test issues. I.e., this recommits minimal functional
change, the FP16 feature test macros, and adds tests that
were missing in the original commit.
llvm-svn: 327455
- Expand GK_*s (i.e. GFX6 -> GFX600, GFX601, etc.)
- This allows us to choose features correctly in some cases (for example, fast fmaf is available on gfx600, but not gfx601)
- Move HasFMAF, HasFP64, HasLDEXPF to GPUInfo tables
- Add HasFastFMA, HasFastFMAF to GPUInfo tables
- Add missing tests
llvm-svn: 326254
Summary:
If the flag -fforce-enable-int128 is passed, it will enable support for __int128_t and __uint128_t types.
This flag can then be used to build compiler-rt for RISCV32.
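Illustrative use once the flag is passed (e.g. clang -target riscv32
-fforce-enable-int128):
```
__int128_t widening_mul(long long a, long long b) {
  return (__int128_t)a * (__int128_t)b;
}
```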
Reviewers: asb, kito-cheng, apazos, efriedma
Reviewed By: asb, efriedma
Subscribers: shiva0217, efriedma, jfb, dschuff, sdardis, sbc100, jgravelle-google, aheejin, rbar, johnrusso, simoncook, jordy.potman.lists, sabuasal, niosHD, cfe-commits
Differential Revision: https://reviews.llvm.org/D43105
llvm-svn: 326045
LLVM now supports a new target feature "alu32" which can be enabled or
disabled by "-mattr=[+|-]alu32" when using llc.
This patch links Clang with it, so the feature can also be enabled by passing the
related options to Clang, for example:
-Xclang -target-feature -Xclang +alu32
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Yonghong Song <yhs@fb.com>
llvm-svn: 325996
Cannon Lake does not support CLWB, therefore it
does not include all features listed under SKX.
Patch by Gabor Buella
Differential Revision: https://reviews.llvm.org/D43459
llvm-svn: 325655
This patch provides mitigation for CVE-2017-5715, Spectre variant two,
which affects the P5600 and P6600. It provides the option
-mindirect-jump=hazard, which instructs the LLVM backend to replace
indirect branches with their hazard barrier variants.
This option is accepted when targeting MIPS revision two or later.
The mitigation strategy suggested by MIPS for these processors is to
use two hazard barrier instructions. 'jalr.hb' and 'jr.hb' are hazard
barrier variants of the 'jalr' and 'jr' instructions respectively.
These instructions impede the execution of the instruction stream until
architecturally defined hazards (changes to the instruction stream,
privileged registers which may affect execution) are cleared. These
instructions in MIPS' designs are not speculated past.
These instructions are used with the option -mindirect-jump=hazard
when branching indirectly and for indirect function calls.
These instructions are defined by the MIPS32R2 ISA, so this mitigation
method is not compatible with processors which implement an earlier
revision of the MIPS ISA.
Implementation note: I've opted to provide this as an
-mindirect-jump={hazard,...} style option in case alternative
mitigation methods are required for other implementations of the MIPS
ISA in future, e.g. retpoline style solutions.
Reviewers: atanasyan
Differential Revision: https://reviews.llvm.org/D43487
llvm-svn: 325651
Summary:
Make clang accept `-msahf` (and `-mno-sahf`) flags to activate the
`+sahf` feature for the backend, for bug 36028 (Incorrect use of
pushf/popf enables/disables interrupts on amd64 kernels). This was
originally submitted in bug 36037 by Jonathan Looney
<jonlooney@gmail.com>.
As described there, GCC also uses `-msahf` for this feature, and the
backend already recognizes the `+sahf` feature. All that is needed is to
teach clang to pass this on to the backend.
The mapping of feature support onto CPUs may not be complete; rather, it
was chosen to match LLVM's idea of which CPUs support this feature (see
lib/Target/X86/X86.td).
I also updated the affected test case (CodeGen/attr-target-x86.c) to
match the emitted output.
Reviewers: craig.topper, coby, efriedma, rsmith
Reviewed By: craig.topper
Subscribers: emaste, cfe-commits
Differential Revision: https://reviews.llvm.org/D43394
llvm-svn: 325446
Apparently storing the pointer to a StringLiteral as
a StringRef caused this section of code to issue a ubsan
warning. This will hopefully fix that.
llvm-svn: 324687
Due to what seems to be a bug in older versions of MSVC, constexpr
member arrays with a redefinition (to force emission) require
their initial definition to have the size spelled out between the brackets.
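A sketch of the workaround (names illustrative): the in-class definition
spells out the array bound, and the out-of-line redefinition forces emission.
```
struct BuiltinTable {
  static constexpr int Sizes[3] = {1, 2, 4}; // size given between the brackets
};
constexpr int BuiltinTable::Sizes[]; // redefinition to force emission
```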
llvm-svn: 324682
When rejecting a march= or target-cpu command line parameter,
the message is quite lacking. This patch adds a note that prints
all possible values for the current target, if the target supports it.
This adds support for the ARM/AArch64 targets (more to come!).
Differential Revision: https://reviews.llvm.org/D42978
llvm-svn: 324673
Clang can use CUDA-9.1 now, though new APIs are not implemented yet.
The major change is that the headers in CUDA-9.1 went through substantial
changes that started in CUDA-9.0, which required corresponding changes
in the CUDA compatibility headers provided by clang.
There are two major issues:
* CUDA SDK no longer provides declarations for libdevice functions.
* A lot of device-side functions have become nvcc's builtins and
CUDA headers no longer contain their implementations.
This patch changes the way CUDA headers are handled if we compile
with CUDA 9.x. Both 9.0 and 9.1 are affected.
* Clang provides its own declarations of libdevice functions.
* For CUDA-9.x clang now provides implementation of device-side
'standard library' functions using libdevice.
This patch should not affect compilation with CUDA-8. There may be
some observable differences for CUDA-9.0, though they are not expected
to affect functionality.
Tested: CUDA test-suite tests for all supported combinations of:
CUDA: 7.0,7.5,8.0,9.0,9.1
GPU: sm_20, sm_35, sm_60, sm_70
Differential Revision: https://reviews.llvm.org/D42513
llvm-svn: 323713
This is to fix the bug reported in https://bugs.llvm.org/show_bug.cgi?id=34347#c6.
Currently, MaxAtomicInlineWidth is set to 64 for all x86-32 targets. However,
i386 doesn't support any cmpxchg related instructions. i486 only supports cmpxchg.
So in this patch MaxAtomicInlineWidth is reset as follows:
For i386, the MaxAtomicInlineWidth should be 0 because no cmpxchg is supported.
For i486, the MaxAtomicInlineWidth should be 32 because it supports cmpxchg.
For other 32-bit x86 CPUs, the MaxAtomicInlineWidth should be 64 because of cmpxchg8b.
Differential Revision: https://reviews.llvm.org/D42154
llvm-svn: 323281
Summary:
First, we need to explain the core of the vulnerability. Note that this
is a very incomplete description, please see the Project Zero blog post
for details:
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
The basis for branch target injection is to direct speculative execution
of the processor to some "gadget" of executable code by poisoning the
prediction of indirect branches with the address of that gadget. The
gadget in turn contains an operation that provides a side channel for
reading data. Most commonly, this will look like a load of secret data
followed by a branch on the loaded value and then a load of some
predictable cache line. The attacker then uses timing of the processor's
cache to determine which direction the branch took *in the speculative
execution*, and in turn what one bit of the loaded value was. Due to the
nature of these timing side channels and the branch predictor on Intel
processors, this allows an attacker to leak data only accessible to
a privileged domain (like the kernel) back into an unprivileged domain.
The goal is simple: avoid generating code which contains an indirect
branch that could have its prediction poisoned by an attacker. In many
cases, the compiler can simply use directed conditional branches and
a small search tree. LLVM already has support for lowering switches in
this way and the first step of this patch is to disable jump-table
lowering of switches and introduce a pass to rewrite explicit indirectbr
sequences into a switch over integers.
However, there is no fully general alternative to indirect calls. We
introduce a new construct we call a "retpoline" to implement indirect
calls in a non-speculatable way. It can be thought of loosely as
a trampoline for indirect calls which uses the RET instruction on x86.
Further, we arrange for a specific call->ret sequence which ensures the
processor predicts the return to go to a controlled, known location. The
retpoline then "smashes" the return address pushed onto the stack by the
call with the desired target of the original indirect call. The result
is a predicted return to the next instruction after a call (which can be
used to trap speculative execution within an infinite loop) and an
actual indirect branch to an arbitrary address.
On 64-bit x86 ABIs, this is especially easy to do in the compiler by
using a guaranteed scratch register to pass the target into this device.
For 32-bit ABIs there isn't a guaranteed scratch register and so several
different retpoline variants are introduced to use a scratch register if
one is available in the calling convention and to otherwise use direct
stack push/pop sequences to pass the target address.
This "retpoline" mitigation is fully described in the following blog
post: https://support.google.com/faqs/answer/7625886
We also support a target feature that disables emission of the retpoline
thunk by the compiler to allow for custom thunks if users want them.
These are particularly useful in environments like kernels that
routinely do hot-patching on boot and want to hot-patch their thunk to
different code sequences. They can write this custom thunk and use
`-mretpoline-external-thunk` *in addition* to `-mretpoline`. In this
case, on x86-64 the thunk names must be:
```
__llvm_external_retpoline_r11
```
or on 32-bit:
```
__llvm_external_retpoline_eax
__llvm_external_retpoline_ecx
__llvm_external_retpoline_edx
__llvm_external_retpoline_push
```
And the target of the retpoline is passed in the named register, or in
the case of the `push` suffix on the top of the stack via a `pushl`
instruction.
There is one other important source of indirect branches in x86 ELF
binaries: the PLT. These patches also include support for LLD to
generate PLT entries that perform a retpoline-style indirection.
The only other indirect branches remaining that we are aware of are from
precompiled runtimes (such as crt0.o and similar). The ones we have
found are not really attackable, and so we have not focused on them
here, but eventually these runtimes should also be replicated for
retpoline-ed configurations for completeness.
For kernels or other freestanding or fully static executables, the
compiler switch `-mretpoline` is sufficient to fully mitigate this
particular attack. For dynamic executables, you must compile *all*
libraries with `-mretpoline` and additionally link the dynamic
executable and all shared libraries with LLD and pass `-z retpolineplt`
(or use similar functionality from some other linker). We strongly
recommend also using `-z now` as non-lazy binding allows the
retpoline-mitigated PLT to be substantially smaller.
When manually applying transformations similar to `-mretpoline` to the
Linux kernel we observed very small performance hits to applications
running typical workloads, and relatively minor hits (approximately 2%)
even for extremely syscall-heavy applications. This is largely due to
the small number of indirect branches that occur in performance
sensitive paths of the kernel.
When using these patches on statically linked applications, especially
C++ applications, you should expect to see a much more dramatic
performance hit. For microbenchmarks that are switch, indirect-, or
virtual-call heavy we have seen overheads ranging from 10% to 50%.
However, real-world workloads exhibit substantially lower performance
impact. Notably, techniques such as PGO and ThinLTO dramatically reduce
the impact of hot indirect calls (by speculatively promoting them to
direct calls) and allow optimized search trees to be used to lower
switches. If you need to deploy these techniques in C++ applications, we
*strongly* recommend that you ensure all hot call targets are statically
linked (avoiding PLT indirection) and use both PGO and ThinLTO. Well
tuned servers using all of these techniques saw 5% - 10% overhead from
the use of retpoline.
We will add detailed documentation covering these components in
subsequent patches, but wanted to make the core functionality available
as soon as possible. Happy for more code review, but we'd really like to
get these patches landed and backported ASAP for obvious reasons. We're
planning to backport this to both 6.0 and 5.0 release streams and get
a 5.0 release with just this cherry picked ASAP for distros and vendors.
This patch is the work of a number of people over the past month: Eric, Reid,
Rui, and myself. I'm mailing it out as a single commit due to the time
sensitive nature of landing this and the need to backport it. Huge thanks to
everyone who helped out here, and everyone at Intel who helped out in
discussions about how to craft this. Also, credit goes to Paul Turner (at
Google, but not an LLVM contributor) for much of the underlying retpoline
design.
Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer
Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D41723
llvm-svn: 323155