8548 CPU is GCC's name for the e500v2, so accept this in clang. The
e500v2 doesn't support lwsync, so define __NO_LWSYNC__ for this as well,
as GCC does.
Differential Revision: https://reviews.llvm.org/D67787
All SSE capable CPUs have MMX. 3dnow implicitly enables MMX.
We have code that detects if sse is enabled and implicitly enables
MMX unless -mno-mmx is passed. So in most cases we were already
enabling MMX if march passed a CPU that supported SSE.
The exception to this is if you pass -march for a CPU that supports SSE
and also pass -mno-sse. We should still enable MMX since it's part
of the CPU capability.
Summary:
Use a forward declaration of DataLayout instead of including
DataLayout.h in clang's TargetInfo.h. This reduces include
dependencies on DataLayout.h (and other headers such as
DerivedTypes.h and Type.h that are included by DataLayout.h).
This required moving the implementation of TargetInfo::resetDataLayout
from TargetInfo.h to TargetInfo.cpp.
Reviewers: rnk
Reviewed By: rnk
Subscribers: jvesely, nhaehnle, cfe-commits, llvm-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D69262
llvm-svn: 375438
The recently announced IBM z15 processor implements the architecture
already supported as "arch13" in LLVM. This patch adds support for
"z15" as an alternate architecture name for arch13.
Corresponding LLVM support was committed as rev. 372435.
llvm-svn: 372436
RISC-V LLVM used to implement only the small/medlow code model, so it
defined __riscv_cmodel_medlow directly without any check.
Now that the medium/medany code model exists in the RISC-V back-end, the
macro should be defined according to the code model actually in use.
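For illustration only (not part of this change), code can now
distinguish the code model via these macros:

  /* Illustrative preprocessor check; macro names follow the RISC-V
     C API conventions. */
  #if defined(__riscv_cmodel_medany)
  /* medium/any code model: symbols are addressed PC-relatively */
  #elif defined(__riscv_cmodel_medlow)
  /* medium/low code model: symbols must sit near the zero address */
  #endif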
Reviewed By: lewis-revill
Differential Revision: https://reviews.llvm.org/D67065
llvm-svn: 372078
A number of inline assembly constraints are currently supported by LLVM, but rejected as invalid by Clang:
Target independent constraints:
s: An integer constant, but allowing only relocatable values
ARM specific constraints:
j: An immediate integer between 0 and 65535 (valid for MOVW)
x: A 32, 64, or 128-bit floating-point/SIMD register: s0-s15, d0-d7, or q0-q3
N: An immediate integer between 0 and 31 (Thumb1 only)
O: An immediate integer which is a multiple of 4 between -508 and 508. (Thumb1 only)
This patch adds support to Clang for the missing constraints along with some checks to ensure that the constraints are used with the correct target and Thumb mode, and that immediates are within valid ranges (at least where possible). The constraints are already implemented in LLVM, but a couple of minor corrections to the checks were needed (V8M Baseline includes MOVW so should work with 'j'; 'N' and 'O' shouldn't be valid in Thumb2) so that Clang and LLVM are in line with each other and the documentation.
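As an illustrative sketch only (not from the patch; assumes a target
with MOVW, i.e. Armv7 or v8-M Baseline and later), the 'j' constraint
can be used like this:

  int movw_example(void) {
    int r;
    /* 'j' accepts an immediate in the range 0-65535, suitable for MOVW */
    __asm__("movw %0, %1" : "=r"(r) : "j"(4660));
    return r;
  }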
Differential Revision: https://reviews.llvm.org/D65863
Change-Id: I18076619e319bac35fbb60f590c069145c9d9a0a
llvm-svn: 371079
Summary:
r337347 added support for the Signal Processing Engine (SPE) to LLVM.
This follows that up with the clang side.
This adds -mspe and -mno-spe, to match GCC.
Subscribers: nemanjai, kbarton, cfe-commits
Differential Revision: https://reviews.llvm.org/D49754
llvm-svn: 371066
Summary:
GCC separates the `__riscv_float_abi_*` macros and the
`__riscv_abi_rve` macro. If the chosen ABI is ilp32e, `gcc -march=rv32i
-mabi=ilp32i -E -dM` shows that both `__riscv_float_abi_soft` and
`__riscv_abi_rve` are set.
This patch corrects the compiler logic around these defines.
At the moment, this patch will not change clang's behaviour, because we do not
accept the `ilp32e` abi yet.
Reviewers: luismarques, asb
Reviewed By: luismarques
Subscribers: rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D66591
llvm-svn: 370709
-Deprecate -mmpx and -mno-mpx command line options
-Remove CPUID detection of mpx for -march=native
-Remove MPX from all CPUs
-Remove MPX preprocessor define
I've left the "mpx" string in the backend so we don't fail on old IR, but it's not connected to anything.
gcc has also deprecated these command line options. https://www.phoronix.com/scan.php?page=news_item&px=GCC-Patch-To-Drop-MPX
Differential Revision: https://reviews.llvm.org/D66669
llvm-svn: 370393
Summary: This ensures that libcalls aren't generated when the target supports atomics. Atomics aren't in the base RV32I/RV64I instruction sets, so MaxAtomicInlineWidth and MaxAtomicPromoteWidth are set only when the atomics extension is being targeted. This must be done in setMaxAtomicWidth, which runs after handleTargetFeatures has been called.
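As a hedged sketch of the effect (assuming an rv32ia/rv64ia style
target and the standard <stdatomic.h> interface):

  #include <stdatomic.h>

  /* With the A extension targeted, this can now be inlined (e.g. as an
     amoadd.w) instead of being lowered to an __atomic_* libcall. */
  int bump(_Atomic int *counter) {
    return atomic_fetch_add_explicit(counter, 1, memory_order_relaxed);
  }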
Reviewers: jfb, jyknight, wmi, asb
Reviewed By: asb
Subscribers: pzheng, MaskRay, s.egerton, lenary, dexonsmith, psnobl, benna, Jim, JohnLLVM, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, lewis-revill, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D57450
llvm-svn: 370073
Push the LR register before calling __gnu_mcount_nc, as it expects the value of the LR register to be
on top of the stack on ARM32.
Differential Revision: https://reviews.llvm.org/D65019
llvm-svn: 369147
This allows the constraint A to be used in inline asm for RISC-V, so
that an address held in a register can be used as an operand.
This patch adds the minimal amount of code required to get operands with
the right constraints to compile.
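A hedged usage sketch (the amoswap sequence is illustrative and assumes
the A extension; it is not code from this patch):

  int swap_word(int *lock, int value) {
    int result;
    /* "A" passes *lock as a memory operand whose address is held in a
       single register, which is what the AMO instructions require. */
    __asm__ volatile("amoswap.w %0, %2, %1"
                     : "=r"(result), "+A"(*lock)
                     : "r"(value));
    return result;
  }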
Differential Revision: https://reviews.llvm.org/D54295
llvm-svn: 369093
Support -march=tigerlake for x86.
Compared with Icelake Client, it includes four new features:
avx512vp2intersect, movdiri, movdir64b, and shstk.
Patch by Xiang Zhang (xiangzhangllvm)
Differential Revision: https://reviews.llvm.org/D65840
llvm-svn: 368543
This patch adds the SVE built-in types defined by the Procedure Call
Standard for the Arm Architecture:
https://developer.arm.com/docs/100986/0000
It handles the types in all relevant places that deal with built-in types.
At the moment, some of these places bail out with an error, including:
(1) trying to generate LLVM IR for the types
(2) trying to generate debug info for the types
(3) trying to mangle the types using the Microsoft C++ ABI
(4) trying to @encode the types in Objective C
(1) and (2) are fixed by follow-on patches but (unlike this patch)
they deal mostly with target-specific LLVM details, so seemed like
a logically separate change. There is currently no spec for (3) and
(4), so reporting an error seems like the correct behaviour for now.
The intention is that the types will become sizeless types:
http://lists.llvm.org/pipermail/cfe-dev/2019-June/062523.html
The main purpose of the sizeless type extension is to diagnose
impossible or dangerous uses of the types, such as any that would
require sizeof to have a meaningful defined value.
Until then, the patch sets the alignments of the types to the values
specified in the link above. It also sets the sizes of the types to
zero, which is chosen to be consistently wrong and shouldn't affect
correctly-written code (i.e. code that would compile even with the
sizeless type extension).
The patch adds the common subset of functionality needed to test the
sizeless type extension on the one hand and to provide SVE intrinsic
functions on the other. After this patch, the two pieces of work are
essentially independent.
The patch is based on one by Graham Hunter:
https://reviews.llvm.org/D59245
Differential Revision: https://reviews.llvm.org/D62960
llvm-svn: 368413
Summary:
The maximum alignment used by the ARM architecture
is 64 bits, not 128.
This could cause overaligned memory
accesses for 128-bit NEON vectors, which
have unpredictable behaviour.
This fixes: https://bugs.llvm.org/show_bug.cgi?id=42668
Reviewers: ostannard, dmgreen, srhines, danalbert, pirama, peter.smith
Reviewed By: pirama, peter.smith
Subscribers: phosek, thegameg, thakis, llvm-commits, carwil, peter.smith, javed.absar, kristof.beyls, cfe-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D65000
llvm-svn: 368288
Summary:
This adds the 'f' inline assembly constraint, as supported by GCC. An
'f'-constrained operand is passed in a floating point register. Exactly
which kind of floating-point register (32-bit or 64-bit) is decided
based on the operand type and the available standard extensions (-f and
-d, respectively).
This patch adds support in both the clang frontend, and LLVM itself.
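A hedged usage sketch (assumes a target with the F extension; not code
from the patch):

  float fadd_example(float a, float b) {
    float result;
    /* 'f' places each operand in a floating-point register; 32-bit
       registers are chosen here because the operand type is float. */
    __asm__("fadd.s %0, %1, %2" : "=f"(result) : "f"(a), "f"(b));
    return result;
  }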
Reviewers: asb, lewis-revill
Reviewed By: asb
Subscribers: hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D65500
llvm-svn: 367403
This adds support for parsing/emitting in IR the floating-point RISC-V
registers in inline assembly clobber lists.
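For example (illustrative only, assuming the F extension), an FP
register can now be named in the clobber list:

  void clobbers_ft0(void) {
    /* ft0 in the clobber list is now parsed and emitted in the IR */
    __asm__ volatile("fmv.w.x ft0, zero" ::: "ft0");
  }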
Differential Revision: https://reviews.llvm.org/D64737
llvm-svn: 367399
make check-all currently fails on x86_64-pc-solaris2.11 when building with GCC 9:
Undefined first referenced
symbol in file
_ZN11__sanitizer14internal_lseekEimi SANITIZER_TEST_OBJECTS.sanitizer_libc_test.cc.i386.o
_ZN11__sanitizer23MapWritableFileToMemoryEPvmim SANITIZER_TEST_OBJECTS.sanitizer_libc_test.cc.i386.o
ld: fatal: symbol referencing errors
clang-9: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [projects/compiler-rt/lib/sanitizer_common/tests/CMakeFiles/TSanitizer-i386-Test.dir/build.make:92: projects/compiler-rt/lib/sanitizer_common/tests/Sanitizer-i386-Test] Error 1
While e.g. __sanitizer::internal_lseek is defined in sanitizer_solaris.cc, g++ 9
predefines _FILE_OFFSET_BITS=64 while clang++ currently does not.
This patch resolves this inconsistency by following the gcc lead, which allows
make check-all to finish successfully.
There's one caveat: gcc defines _LARGEFILE_SOURCE and _LARGEFILE64_SOURCE for C++ only, while clang has long been doing it for
all languages. I'd like to keep it this way because those macros do is to make
declarations of fseek/ftello (_LARGEFILE_SOURCE) resp. the 64-bit versions
of largefile functions (*64 with _LARGEFILE64_SOURCE) visible additionally.
However, _FILE_OFFSET_BITS=64 changes all affected functions to be largefile-aware.
I'd like to restrict this to C++, just like gcc does.
To avoid a similar inconsistence with host compilers that don't predefine _FILE_OFFSET_BITS=64
(e.g. clang < 9, gcc < 9), this needs a compantion patch https://reviews.llvm.org/D64483.
Tested on x86_64-pc-solaris2.11.
Differential Revision: https://reviews.llvm.org/D64482
llvm-svn: 367305
The Arm C Language Extensions for SVE document specifies that
__ARM_FEATURE_SVE should be set when the compiler supports SVE and
implements all the extensions described in the document.
This is currently not yet the case, so the feature should be disabled
until the compiler can provide all the extensions as described.
Reviewers: c-rhodes, rengolin, rovka, ktkachov
Reviewed By: rengolin
Differential Revision: https://reviews.llvm.org/D65404
llvm-svn: 367301
The maximum alignment used by the ARM architecture
is 64 bits, not 128.
This could cause overaligned memory
accesses for 128-bit NEON vectors, which
have unpredictable behaviour.
This fixes: https://bugs.llvm.org/show_bug.cgi?id=42668
Patch by: Diogo Sampaio(diogo.sampaio@arm.com)
Differential Revision: https://reviews.llvm.org/D65000
Change-Id: I5a62b766491f15dd51e4cfe6625929db897f67e3
llvm-svn: 367119
This seems to be an old vestige of a previous implementation of getting
the default calling convention, and everything is now using
CXXABI/ASTContext's getDefaultCallingConvention. Remove it, since it
isn't doing anything.
llvm-svn: 367039
Adds the SVE vector and predicate registers to the list of known registers.
Patch by Kerry McLaughlin.
Reviewers: erichkeane, sdesmalen, rengolin
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D64739
llvm-svn: 366878
Clang :: Headers/max_align.c currently FAILs on 64-bit SPARC:
error: 'error' diagnostics seen but not expected:
File /vol/llvm/src/clang/dist/test/Headers/max_align.c Line 12: static_assert failed due to requirement '8 == _Alignof(max_align_t)' ""
1 error generated.
This happens because SuitableAlign isn't defined for SPARCv9 unlike SPARCv8
(which uses the default of 64 bits). gcc's sparc/sparc.h has
#define BIGGEST_ALIGNMENT (TARGET_ARCH64 ? 128 : 64)
This patch sets SuitableAlign to match and updates the corresponding testcase.
Tested on sparcv9-sun-solaris2.11.
Differential Revision: https://reviews.llvm.org/D64487
llvm-svn: 366820
The RISC-V hard float calling convention requires the frontend to:
* Detect cases where, once "flattened", a struct can be passed using
int+fp or fp+fp registers under the hard float ABI and coerce to the
appropriate type(s)
* Track usage of GPRs and FPRs in order to gate the above, and to
determine when signext/zeroext attributes must be added to integer
scalars
This patch attempts to do this in compliance with the documented ABI,
and uses ABIArgInfo::CoerceAndExpand in order to do this. @rjmccall, as
author of that code I've tagged you as reviewer for initial feedback on
my usage.
Note that a previous version of the ABI indicated that when passing an
int+fp struct using a GPR+FPR, the int would need to be sign or
zero-extended appropriately. GCC never did this and the ABI was changed,
which makes life easier as ABIArgInfo::CoerceAndExpand can't currently
handle sign/zero-extension attributes.
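As an illustrative sketch of the kind of struct the flattening rule
targets (assuming the lp64d ABI; this is not code from the patch):

  #include <stdint.h>

  /* Once flattened this is one integer field plus one FP field, so under
     the hard float ABI it can be passed in a GPR + FPR pair (e.g. a0 +
     fa0) rather than entirely in GPRs or indirectly. */
  struct IntFP { int32_t i; double d; };

  double sum(struct IntFP s) { return s.d + (double)s.i; }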
Re-landed after backing out 366450 due to missed hunks.
Differential Revision: https://reviews.llvm.org/D60456
llvm-svn: 366480
The RISC-V hard float calling convention requires the frontend to:
* Detect cases where, once "flattened", a struct can be passed using
int+fp or fp+fp registers under the hard float ABI and coerce to the
appropriate type(s)
* Track usage of GPRs and FPRs in order to gate the above, and to
determine when signext/zeroext attributes must be added to integer
scalars
This patch attempts to do this in compliance with the documented ABI,
and uses ABIArgInfo::CoerceAndExpand in order to do this. @rjmccall, as
author of that code I've tagged you as reviewer for initial feedback on
my usage.
Note that a previous version of the ABI indicated that when passing an
int+fp struct using a GPR+FPR, the int would need to be sign or
zero-extended appropriately. GCC never did this and the ABI was changed,
which makes life easier as ABIArgInfo::CoerceAndExpand can't currently
handle sign/zero-extension attributes.
Differential Revision: https://reviews.llvm.org/D60456
llvm-svn: 366450
The jcvt intrinsic defined in ACLE [1] is available when __ARM_FEATURE_JCVT is defined.
This change introduces the AArch64 intrinsic, wires it up to the instruction and a new clang builtin function.
The __ARM_FEATURE_JCVT macro is now defined when an Armv8.3-A or higher target is used.
I've implemented the target detection logic in Clang so that this feature is enabled for architectures from armv8.3-a onwards (so -march=armv8.4-a also enables this, for example).
make check-all didn't show any new failures.
[1] https://developer.arm.com/docs/101028/latest/data-processing-intrinsics
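A hedged usage example (assumes the ACLE spelling __jcvt provided by
arm_acle.h):

  #include <arm_acle.h>
  #include <stdint.h>

  #if defined(__ARM_FEATURE_JCVT)
  /* Converts a double to a 32-bit integer using the JavaScript rounding
     rules, mapping to a single FJCVTZS instruction. */
  int32_t to_js_int(double x) { return __jcvt(x); }
  #endif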
Differential Revision: https://reviews.llvm.org/D64495
llvm-svn: 366197
gcc PowerPC supports 3 representations of long double:
* -mlong-double-64
long double has the same representation as double but is mangled as `e`.
In clang, this is the default on AIX, FreeBSD and Linux musl.
* -mlong-double-128
2 possible 128-bit floating point representations:
+ -mabi=ibmlongdouble
IBM extended double format. Mangled as `g`
In clang, this is the default on Linux glibc.
+ -mabi=ieeelongdouble
IEEE 754 quadruple-precision format. Mangled as `u9__ieee128` (`U10__float128` before gcc 8.2)
This is currently unavailable.
This patch adds -mabi=ibmlongdouble and -mabi=ieeelongdouble, and thus
makes the IEEE 754 quadruple-precision long double available for
languages supported by clang.
Reviewed By: hfinkel
Differential Revision: https://reviews.llvm.org/D64283
llvm-svn: 366044
This patch series adds support for the next-generation arch13
CPU architecture to the SystemZ backend.
This includes:
- Basic support for the new processor and its features.
- Support for low-level builtins mapped to new LLVM intrinsics.
- New high-level intrinsics in vecintrin.h.
- Indicate support by defining __VEC__ == 10303.
Note: No currently available Z system supports the arch13
architecture. Once new systems become available, the
official system name will be added as supported -march name.
llvm-svn: 365933
This patch makes the driver option -mlong-double-128 available for X86
and PowerPC. The CC1 option -mlong-double-128 is available on all targets
for users to test on unsupported targets.
On PowerPC, -mlong-double-128 uses the IBM extended double format
because we don't support -mabi=ieeelongdouble yet (D64283).
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D64277
llvm-svn: 365866
These macro definitions don't depend on the template parameter, so they
don't need to be part of the template. Move them to a .cpp file.
llvm-svn: 365556
In gcc PowerPC, long double has 3 mangling schemes:
-mlong-double-64: `e`
-mlong-double-128 -mabi=ibmlongdouble: `g`
-mlong-double-128 -mabi=ieeelongdouble: `u9__ieee128` (gcc <= 8.1: `U10__float128`)
The current useFloat128ManglingForLongDouble() bisection is not suitable
when we support -mlong-double-128 in clang (D64277). Replace
useFloat128ManglingForLongDouble() with getLongDoubleMangling() and
getFloat128Mangling() to allow 3 mangling schemes.
I also deleted the `getTriple().isOSBinFormatELF()` check (the Darwin
support has gone: https://reviews.llvm.org/D50988).
For x86, change the mangled code of __float128 from `U10__float128` to `g`. `U10__float128` was wrongly copied from PowerPC.
The test will be added to `test/CodeGen/x86-long-double.cpp` in D64277.
Reviewed By: erichkeane
Differential Revision: https://reviews.llvm.org/D64276
llvm-svn: 365480
-mlong-double-64 is supported on some ports of gcc (i386, x86_64, and ppc{32,64}).
On many other targets, there will be an error:
error: unrecognized command line option '-mlong-double-64'
This patch makes the driver option -mlong-double-64 available for x86
and ppc. The CC1 option -mlong-double-64 is available on all targets for
users to test on unsupported targets.
LongDoubleSize is added as a VALUE_LANGOPT so that the option can be
shared with -mlong-double-128 when we support it in clang.
Also, make powerpc*-linux-musl default to use 64-bit long double. It is
currently the only supported ABI on musl and is also how people
configure powerpc*-linux-musl-gcc.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D64067
llvm-svn: 365412
Summary:
"ww" and "ws" are both constraint codes for VSX vector registers that
hold scalar double data. "ww" is preferred for float while "ws" is
preferred for double.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D64119
llvm-svn: 365106
"To" selects an odd-numbered GPR, and "Te" an even one. There are some
8.1-M instructions that have one too few bits in their register fields
and require registers of particular parity, without necessarily using
a consecutive even/odd pair.
Also, the constraint letter "t" should select an MVE q-register, when
MVE is present. This didn't need any source changes, but some extra
tests have been added.
Reviewers: dmgreen, samparker, SjoerdMeijer
Subscribers: javed.absar, eraman, kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D60709
llvm-svn: 364331
error: 'error' diagnostics expected but not seen:
File /vol/llvm/src/clang/local/test/Sema/wchar.c Line 22: initializing wide char array with non-wide string literal
error: 'error' diagnostics seen but not expected:
File /vol/llvm/src/clang/local/test/Sema/wchar.c Line 20: array initializer must be an initializer list
File /vol/llvm/src/clang/local/test/Sema/wchar.c Line 22: array initializer must be an initializer list
It turns out the definition is wrong, as can be seen in GCC's gcc/config/sol2.h:
/* wchar_t is called differently in <wchar.h> for 32 and 64-bit
compilations. This is called for by SCD 2.4.1, p. 6-83, Figure 6-65
(32-bit) and p. 6P-10, Figure 6.38 (64-bit). */
#undef WCHAR_TYPE
#define WCHAR_TYPE (TARGET_64BIT ? "int" : "long int")
The following patch implements this, and at the same time corrects the wint_t
definition which is the same:
/* Same for wint_t. See SCD 2.4.1, p. 6-83, Figure 6-66 (32-bit). There's
no corresponding 64-bit definition, but this is what Solaris 8
<iso/wchar_iso.h> uses. */
#undef WINT_TYPE
#define WINT_TYPE (TARGET_64BIT ? "int" : "long int")
Clang :: Preprocessor/wchar_t.c and Clang :: Sema/format-strings.c need to
be adjusted to account for that.
Tested on i386-pc-solaris2.11, x86_64-pc-solaris2.11, and x86_64-pc-linux-gnu.
Differential Revision: https://reviews.llvm.org/D62944
llvm-svn: 363612
ARM has a special target feature called soft-float-abi. This feature is
special, since we get it passed to us explicitly in the frontend, but
filter it out before it can land in any target feature strings in LLVM
IR.
__attribute__((target(""))) doesn't quite filter these features out
properly, so today, we get warnings about soft-float-abi being an
unknown feature from the backend.
This CL has us filter soft-float-abi out at a slightly different point,
so we don't end up passing these invalid features to the backend.
Differential Revision: https://reviews.llvm.org/D61750
llvm-svn: 363346
This allows the constraints I, J & K to be used in inline asm for
RISC-V, with the following semantics (equivalent to GCC):
I: Any 12-bit signed immediate.
J: Integer zero only.
K: Any 5-bit unsigned immediate.
See the GCC definitions here:
https://gcc.gnu.org/onlinedocs/gccint/Machine-Constraints.html
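A hedged example of the 'I' constraint (the instruction choice is
illustrative, not from the patch):

  int add_imm(int x) {
    int r;
    /* 'I' requires a 12-bit signed immediate, which addi can encode */
    __asm__("addi %0, %1, %2" : "=r"(r) : "r"(x), "I"(42));
    return r;
  }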
Differential Revision: https://reviews.llvm.org/D54091
llvm-svn: 363055
The recently added cooperlake CPU has made our already ugly switch statement even worse. There's a CPU exclusion list around the bf16 feature in the cooper lake block. I worry that we'll have to keep adding new CPUs to that until bf16 intercepts a client space CPU. We have several other exclusion lists in other parts of the switch due to skylakeserver, cascadelake, and cooperlake not having sgx. Another for cannonlake not having clwb but having all other features from skx.
This removes all these special ifs at the cost of some duplication of features and a goto. I've copied all of the skx features into either cannonlake or icelakeclient(for clwb). And pulled sklyakeserver, cascadelake, and cooperlake out of the main inheritance chain into their own chain. At the end of skylakeserver we merge back into the main chain at skylakeclient but below sgx. I think this is at least easier to follow.
Differential Revision: https://reviews.llvm.org/D63018
llvm-svn: 362965
If MVE is present at all, then the macro __ARM_FEATURE_MVE is defined
to a value which has bit 0 set for integer MVE, and bit 1 set for
floating-point MVE.
(Floating-point MVE implies integer MVE, so if this macro is defined
at all then it will be set to 1 or 3, never 2.)
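For example (illustrative only), code can test the individual bits:

  #if defined(__ARM_FEATURE_MVE) && (__ARM_FEATURE_MVE & 2)
  /* integer and floating-point MVE are available (value 3) */
  #elif defined(__ARM_FEATURE_MVE)
  /* integer-only MVE (value 1) */
  #endif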
Patch mostly by Simon Tatham
Differential Revision: https://reviews.llvm.org/D60710
llvm-svn: 362806
Change D60691 caused some knock-on failures that weren't caught by the
existing tests. Firstly, selecting a CPU that should have had a
restricted FPU (e.g. `-mcpu=cortex-m4`, which should have 16 d-regs
and no double precision) could give the unrestricted version, because
`ARM::getFPUFeatures` returned a list of features including subtracted
ones (here `-fp64`,`-d32`), but `ARMTargetInfo::initFeatureMap` threw
away all the ones that didn't start with `+`. Secondly, the
preprocessor macros didn't reliably match the actual compilation
settings: for example, `-mfpu=softvfp` could still set `__ARM_FP` as
if hardware FP was available, because the list of features on the cc1
command line would include things like `+vfp4`,`-vfp4d16` and clang
didn't realise that one of those cancelled out the other.
I've fixed both of these issues by rewriting `ARM::getFPUFeatures` so
that it returns a list that enables every FP-related feature
compatible with the selected FPU and disables every feature not
compatible, which is more verbose but means clang doesn't have to
understand the dependency relationships between the backend features.
Meanwhile, `ARMTargetInfo::handleTargetFeatures` is testing for all
the various forms of the FP feature names, so that it won't miss cases
where it should have set `HW_FP` to feed into feature test macros.
That in turn caused an ordering problem when handling `-mcpu=foo+bar`
together with `-mfpu=something_that_turns_off_bar`. To fix that, I've
arranged that the `+bar` suffixes on the end of `-mcpu` and `-march`
cause feature names to be put into a separate vector which is
concatenated after the output of `getFPUFeatures`.
Another side effect of all this is to fix a bug where `clang -target
armv8-eabi` by itself would fail to set `__ARM_FEATURE_FMA`, even
though `armv8` (aka Arm v8-A) implies FP-Armv8 which has FMA. That was
because `HW_FP` was being set to a value including only the `FPARMV8`
bit, but that feature test macro was testing only the `VFP4FPU` bit.
Now `HW_FP` ends up with all the bits set, so it gives the right
answer.
Changes to tests included in this patch:
* `arm-target-features.c`: I had to change basically all the expected
results. (The Cortex-M4 test in there should function as a
regression test for the accidental double-precision bug.)
* `arm-mfpu.c`, `armv8.1m.main.c`: switched to using `CHECK-DAG`
everywhere so that those tests are no longer sensitive to the order
of cc1 feature options on the command line.
* `arm-acle-6.5.c`: been updated to expect the right answer to that
FMA test.
* `Preprocessor/arm-target-features.c`: added a regression test for
the `mfpu=softvfp` issue.
Reviewers: SjoerdMeijer, dmgreen, ostannard, samparker, JamesNagurne
Reviewed By: ostannard
Subscribers: srhines, javed.absar, kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D62998
llvm-svn: 362791
Given the existing infrastructure on the LLVM side for +fp and +fp.dp,
this is more or less trivial, needing only one tiny source change and
a couple of tests.
Patch by Simon Tatham.
Differential Revision: https://reviews.llvm.org/D60699
llvm-svn: 362096
Those two subtarget features were awkward because their semantics are
reversed: each one indicates the _lack_ of support for something in
the architecture, rather than the presence. As a consequence, you
don't get the behavior you want if you combine two sets of feature
bits.
Each SubtargetFeature for an FP architecture version now comes in four
versions, one for each combination of those options. So you can still
say (for example) '+vfp2' in a feature string and it will mean what
it's always meant, but there's a new string '+vfp2d16sp' meaning the
version without those extra options.
A lot of this change is just mechanically replacing positive checks
for the old features with negative checks for the new ones. But one
more interesting change is that I've rearranged getFPUFeatures() so
that the main FPU feature is appended to the output list *before*
rather than after the features derived from the Restriction field, so
that -fp64 and -d32 can override defaults added by the main feature.
Reviewers: dmgreen, samparker, SjoerdMeijer
Subscribers: srhines, javed.absar, eraman, kristof.beyls, hiraditya, zzheng, Petar.Avramovic, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D60691
llvm-svn: 361845
We had an if statement that checked over every avx512* feature to see if it should enable avx512f. Since they are all prefixed with avx512, just check for that instead.
llvm-svn: 361557
Summary:
These features will both be implemented soon, so I thought I would
save time by adding the boilerplate for both of them at the same time.
Reviewers: aheejin
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D62047
llvm-svn: 361516
Summary:
For big-endian powerpc64, the default ABI is ELFv1. OpenPower ABI ELFv2 is supported when -mabi=elfv2 is specified. FreeBSD support for PowerPC64 ELFv2 ABI with LLVM is in progress[1]. This patch adds an alternative way to specify ELFv2 ABI on target triple [2].
The following results are expected:
ELFv1 when using:
-target powerpc64-unknown-freebsd12.0
-target powerpc64-unknown-freebsd12.0 -mabi=elfv1
-target powerpc64-unknown-freebsd12.0-elfv1
ELFv2 when using:
-target powerpc64-unknown-freebsd12.0 -mabi=elfv2
-target powerpc64-unknown-freebsd12.0-elfv2
[1] https://wiki.freebsd.org/powerpc/llvm-elfv2
[2] https://clang.llvm.org/docs/CrossCompilation.html
Patch by Alfredo Dal'Ava Júnior!
Differential Revision: https://reviews.llvm.org/D61950
llvm-svn: 361355
Defines macro __ARM_FEATURE_CMSE to 1 for v8-M targets and introduces
-mcmse option which for v8-M targets sets __ARM_FEATURE_CMSE to 3.
A diagnostic is produced when the option is given on architectures
without support for Security Extensions.
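An illustrative check of the macro values described above (not part of
the patch):

  #if defined(__ARM_FEATURE_CMSE) && (__ARM_FEATURE_CMSE & 2)
  /* built with -mcmse: CMSE (secure state) code generation is enabled */
  #elif defined(__ARM_FEATURE_CMSE)
  /* v8-M target without -mcmse: only the feature's presence is known */
  #endif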
Reviewed By: dmgreen, snidertm
Differential Revision: https://reviews.llvm.org/D59879
llvm-svn: 361261
Previously we were doing this so that the 256 bit selectw builtin could be used in the implementation of the 512->256 bit conversion intrinsic.
After this commit we now use a masked convert builtin that will emit the intrinsic call and the 256-bit select from custom code in CGBuiltin. Then the header only needs to call that one intrinsic.
llvm-svn: 360924
Summary:
- This patch checks the AIX version and defines the appropriate macros.
- Follow up to a comment on D59048.
Author: andusy
Reviewers: hubert.reinterpretcast, jasonliu, sfertile, xingxue
Reviewed By: sfertile
Subscribers: jsji, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D61530
llvm-svn: 360900
Darwin if the version of libc++abi isn't new enough to include the fix
in r319123
This patch resurrects r264998, which was committed to work around a bug
in libc++abi that was causing __cxa_allocate_exception to return memory
that wasn't double-word aligned.
http://lists.llvm.org/pipermail/cfe-commits/Week-of-Mon-20160328/154332.html
In addition, this patch makes clang issue a warning if the type of the
thrown object requires an alignment that is larger than the minimum
guaranteed by the target C++ runtime.
rdar://problem/49864414
Differential Revision: https://reviews.llvm.org/D61667
llvm-svn: 360404
Summary:
1. Enable infrastructure of AVX512_BF16, which is supported for BFLOAT16 in Cooper Lake;
2. Enable intrinsics for VCVTNE2PS2BF16, VCVTNEPS2BF16 and DPBF16PS instructions, which are Vector Neural Network Instructions supporting BFLOAT16 inputs and conversion instructions from IEEE single precision.
For more details about BF16 intrinsic, please refer to the latest ISE document: https://software.intel.com/en-us/download/intel-architecture-instruction-set-extensions-programming-reference
Patch by LiuTianle
Reviewers: craig.topper, smaslov, LuoYuanke, wxiao3, annita.zhang, spatel, RKSimon
Reviewed By: craig.topper
Subscribers: mgorny, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D60552
llvm-svn: 360018
According to the alignment section in the ARM64 ABI document below, MSVC may increase
the alignment of global data based on its total size. Clang doesn't do this. Compiling
the same symbol to different alignments with Clang and MSVC could cause a link
error because some instruction encodings, like 64-bit LDR/STR with immediate,
require the target to be 8-byte aligned, and the linker could choose a code stream
with such an LDR/STR instruction from MSVC and 4-byte-aligned data from Clang for the
final image, which cannot actually be linked together
(see https://bugs.llvm.org/show_bug.cgi?id=41506 for more details).
https://docs.microsoft.com/en-us/cpp/build/arm64-windows-abi-conventions?view=vs-2019#alignment
Differential Revision: https://reviews.llvm.org/D61225
llvm-svn: 359744
This provides intrinsics support for Memory Tagging Extension (MTE),
which was introduced with the Armv8.5-a architecture.
These intrinsics are available when __ARM_FEATURE_MEMORY_TAGGING is defined.
Each intrinsic is described in detail in the ACLE Q1 2019 documentation:
https://developer.arm.com/docs/101028/latest
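A hedged usage sketch; the intrinsic names follow the ACLE document
linked above and are assumptions here, not taken from this message:

  #include <arm_acle.h>

  #if defined(__ARM_FEATURE_MEMORY_TAGGING)
  void *tag_block(void *p) {
    /* Pick a random tag (no exclusions) and colour the pointer with it. */
    void *tagged = __arm_mte_create_random_tag(p, 0);
    /* Store the tag for the granule so later checked accesses match. */
    __arm_mte_set_tag(tagged);
    return tagged;
  }
  #endif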
Reviewed By: Tim Northover, David Spickett
Differential Revision: https://reviews.llvm.org/D60485
llvm-svn: 359348
"DefineStd(Builder, "bpf", Opts)" generates the following three
macros:
bpf
__bpf
__bpf__
and the macro "bpf" is due to the fact that the target language
is C which allows GNU extensions.
The name "bpf" could be easily used as variable name or type
field name. For example, in current linux kernel, there are
four places where bpf is used as a field name. If the corresponding
types are included in bpf program, the compilation error will
occur.
This patch removed predefined macro "bpf" as well as "__bpf" which
is rarely used if used at all.
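An illustrative example of the breakage (the struct names are
hypothetical, mirroring the kernel usage described above):

  struct sock_filter;

  /* With the old predefined macro, GNU C mode expanded "bpf" to 1,
     turning the field into "struct sock_filter *1;", which fails to parse. */
  struct prog_example {
    struct sock_filter *bpf;
  };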
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D61173
llvm-svn: 359310
These builtins provide access to the new integer and
sub-integer variants of MMA (matrix multiply-accumulate) instructions
provided by CUDA-10.x on sm_75 (AKA Turing) GPUs.
Also added a feature for PTX 6.4. While Clang/LLVM does not generate
any PTX instructions that need it, we still need to pass it through to
ptxas in order to be able to compile code that uses the new 'mma'
instruction as inline assembly (e.g. used by NVIDIA's CUTLASS library
https://github.com/NVIDIA/cutlass/blob/master/cutlass/arch/mma.h#L101)
Differential Revision: https://reviews.llvm.org/D60279
llvm-svn: 359248
The Emscripten OS provides a definition of __EMSCRIPTEN__, and also indicates that it
supports iprintf optimizations.
Also define small_printf optimizations, which is a printf with float support
but not long double (which in wasm can be useful since long doubles are 128
bit and force linking of float128 emulation code). This part is based on
sunfish's https://reviews.llvm.org/D57620 (which can't land yet since
the WASI integration isn't ready yet).
Differential Revision: https://reviews.llvm.org/D60167
llvm-svn: 357552
Summary:
This feature is not actually used for anything in the WebAssembly
backend, but adding it allows users to get it into the target features
sections of their objects, which makes these objects
future-compatible.
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, jdoerfert, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D60013
llvm-svn: 357321
Use the new cx8 feature flag that was added to the backend to represent support for cmpxchg8b. Use this flag to set the MaxAtomicInlineWidth.
This also assumes all the cmpxchg instructions are enabled for CK_Generic which is what cc1 defaults to when nothing is specified.
Differential Revision: https://reviews.llvm.org/D59566
llvm-svn: 356709
PentiumPro has HasNOPL set in the backend; i686 does not.
There is a function that looks like it canonicalizes alias names, but it
doesn't seem to be called, so I don't think this is a functional change. But it's
good to be consistent between the backend and frontend.
llvm-svn: 356537
llvm-svn 356197 relanded previously failing test case max_align.c.
This commit will reland the rest of the llvm-svn 356060 commit.
Differential Revision: https://reviews.llvm.org/D59048
llvm-svn: 356208
Summary:
This define should correspond to CMPXCHG16B being available which requires 64-bit mode.
I checked and gcc also seems to only define this in 64-bit mode.
Reviewers: RKSimon, spatel, efriedma, jyknight, jfb
Reviewed By: jfb
Subscribers: jfb, cfe-commits, llvm-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D59287
llvm-svn: 356118
Summary:
A first pass over platform-specific properties of the C API/ABI
on AIX for both 32-bit and 64-bit modes.
This is a continuation of D18360 by Andrew Paprocki and further work by Wu Zhao.
Patch by Andus Yu
Reviewers: apaprocki, chandlerc, hubert.reinterpretcast, jasonliu,
xingxue, sfertile
Reviewed by: hubert.reinterpretcast, apaprocki, sfertile
Differential Revision: https://reviews.llvm.org/D59048
llvm-svn: 356060
Use this feature to fix a bug on ARM where 4 byte alignment is
incorrectly assumed.
Differential Revision: https://reviews.llvm.org/D57335
llvm-svn: 355685
Use this feature to fix a bug on ARM where 4 byte alignment is
incorrectly assumed.
Differential Revision: https://reviews.llvm.org/D57335
llvm-svn: 355585
Use this feature to fix a bug on ARM where 4 byte alignment is
incorrectly assumed.
Differential Revision: https://reviews.llvm.org/D57335
llvm-svn: 355522
This patch enables the following
1) AMD family 17h "znver2" tune flag (-march, -mcpu).
2) ISAs that are enabled for "znver2" architecture.
3) For the time being, it uses the znver1 scheduler model.
4) Tests are updated.
5) This patch is the clang counterpart to D58343
Reviewers: craig.topper
Tags: #clang
Differential Revision: https://reviews.llvm.org/D58344
llvm-svn: 354899
This adds ACLE-defined macros to test for code being compiled in the ROPI and
RWPI position-independence modes.
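An illustrative check; the macro names __ARM_ROPI and __ARM_RWPI follow
the ACLE and are stated here as an assumption, since the message does
not spell them out:

  #if defined(__ARM_ROPI)
  /* read-only data is addressed position-independently */
  #endif
  #if defined(__ARM_RWPI)
  /* read-write data is addressed relative to the static base register */
  #endif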
Differential revision: https://reviews.llvm.org/D23610
llvm-svn: 354265
Summary:
The predefined macro `_ARCH_PWR6X` is associated with GCC's
`-mcpu=power6x` option, which enables generation of P6 "raw mode"
instructions such as `mftgpr`.
Later POWER processors build upon the "architected mode", not the raw
one. `_ARCH_PWR6X` should not be defined for these later processors.
Fixes PR#40236.
Reviewers: echristo, hfinkel, kbarton, nemanjai, wschmidt
Reviewed By: hfinkel
Subscribers: jsji, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D58128
llvm-svn: 353975
The rationale of this change is to fix _Unwind_Word / _Unwind_SWord
definitions for MIPS N32 ABI. This ABI uses 32-bit pointers,
but _Unwind_Word and _Unwind_SWord types are eight bytes long.
# The __attribute__((__mode__(__unwind_word__))) is added to the type
definitions. It makes them equal to the corresponding definitions used
by GCC and allows the types to be overridden using the `getUnwindWordWidth` function.
# The `getUnwindWordWidth` virtual function is overridden in the `MipsTargetInfo`
class and provides the correct type size values.
Differential revision: https://reviews.llvm.org/D58165
llvm-svn: 353965
This patch simply teaches the BPF driver about the new CPU "v3" introduced in
LLVM backend.
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 353479
Summary:
Added ability to generate correct debug info data about the variable
address class. Currently, for all the locals and globals the default
values are used, ADDR_local_space(6) for locals and ADDR_global_space(5)
for globals. The values are taken from the table in
https://docs.nvidia.com/cuda/archive/10.0/ptx-writers-guide-to-interoperability/index.html#cuda-specific-dwarf.
We need to emit correct data for address classes of, at least, shared
and constant globals. Currently, all these variables are treated by
the cuda-gdb debugger as variables in the global address space
and thus require manual data type casting.
Reviewers: echristo, probinson
Subscribers: jholewinski, aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D57162
llvm-svn: 353204
rC352620 caused regressions because it copied the floating point format from the
aux target.
The floating point format decides whether extended long double is supported.
It is x86_fp80 on x86 but IEEE double on amdgcn.
Document usage of the long double type in the HIP programming guide:
https://github.com/ROCm-Developer-Tools/HIP/pull/890
Differential Revision: https://reviews.llvm.org/D57527
llvm-svn: 352801
In a 64-bit MSVC environment, size_t is defined as unsigned long long.
In a single source language like HIP, the data layout should be consistent
between device and host compilation; therefore, copy the data layout controlling
fields from the aux target for the AMDGPU target.
Differential Revision: https://reviews.llvm.org/D56318
llvm-svn: 352620
As Discussed here:
http://lists.llvm.org/pipermail/llvm-dev/2019-January/129543.html
There are problems exposing the _Float16 type on architectures that
haven't defined the ABI/ISel for the type yet, so we're temporarily
disabling the type and making it opt-in.
Differential Revision: https://reviews.llvm.org/D57188
Change-Id: I5db7366dedf1deb9485adb8948b1deb7e612a736
llvm-svn: 352221
This adds a `__wasi__` macro for the wasi OS, similar to `__linux__` etc. for
other OS's.
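For example:

  #ifdef __wasi__
  /* WASI-specific code path */
  #endif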
Differential Revision: https://reviews.llvm.org/D57155
llvm-svn: 352105
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
a stray single '\r' from one file. These are the last line ending issues
I can find in the files containing parts of LLVM's file headers.
llvm-svn: 351634
Replace multiple comparisons of getOS() value with FreeBSD, NetBSD,
OpenBSD and DragonFly with matching isOS*BSD() methods. This should
improve the consistency of coding style without changing the behavior.
Direct getOS() comparisons were left whenever used in switch or switch-
like context.
Differential Revision: https://reviews.llvm.org/D55916
llvm-svn: 349752
The Darwin targets use `int64_t` and `uint64_t` to define the `int_least64_t`
and `int_fast64_t` types. The underlying type is actually a `long long`. Match
the types to allow the printf specifiers to work properly and have the compiler
vended macros match the implementation on the target.
llvm-svn: 348939
Summary:
The patch adds VSX register support for inline assembly. After this
patch, we can use VSX registers in an inline assembly clobber list without error.
Reviewed By: jsji, nemanjai
Differential Revision: https://reviews.llvm.org/D55192
llvm-svn: 348572
Support the Swift calling convention on Windows ARM and AArch64. Both
of these conform to the AAPCS, AAPCS64 calling convention, and LLVM has
been adjusted to account for the register usage. Ensure that the
frontend passes this into the backend. This allows the swift runtime to
be built for Windows.
llvm-svn: 348454
This patch addresses a compilation error with clang when
running on Haiku, where it was unable to compile code using
__float128 (it threw a compilation error such as '__float128 is
not supported on this target').
Patch by kallisti5 (Alexander von Gluck IV)
Differential Revision: https://reviews.llvm.org/D54901
llvm-svn: 348368
As of rev. 268898, clang supports __float128 on SystemZ. This seems to
have been in error. GCC has never supported __float128 on SystemZ,
since the "long double" type on the platform is already IEEE-128. (GCC
only supports __float128 on platforms where "long double" is some other
data type.)
For compatibility reasons this patch removes __float128 on SystemZ
again. The test case is updated accordingly.
llvm-svn: 348247
This adds Hurd toolchain support to Clang's driver, in addition
to translating the triple from the Hurd-compatible form to
the actual triple registered in LLVM.
(Phabricator was stripping the empty files from the patch so I
manually created them)
Patch by sthibaul (Samuel Thibault)
Differential Revision: https://reviews.llvm.org/D54379
llvm-svn: 347833
This is skylake-avx512 with the addition of avx512vnni ISA.
Patch by Jianping Chen
Differential Revision: https://reviews.llvm.org/D54792
llvm-svn: 347682
This patch should not introduce any behavior changes. It consists of
mostly one of two changes:
1. Replacing fall through comments with the LLVM_FALLTHROUGH macro
2. Inserting 'break' before falling through into a case block consisting
of only 'break'.
We were already using this warning with GCC, but its warning behaves
slightly differently. In this patch, the following differences are
relevant:
1. GCC recognizes comments that say "fall through" as annotations, clang
doesn't
2. GCC doesn't warn on "case N: foo(); default: break;", clang does
3. GCC doesn't warn when the case contains a switch, but falls through
the outer case.
I will enable the warning separately in a follow-up patch so that it can
be cleanly reverted if necessary.
Reviewers: alexfh, rsmith, lattner, rtrieu, EricWF, bollu
Differential Revision: https://reviews.llvm.org/D53950
llvm-svn: 345882
This silences a -Wimplicit-fallthrough warning from clang. GCC does not
appear to warn when the case body ends in a switch.
This is a somewhat surprising but intended fallthrough that I pulled out
from my mechanical patch. The code intends to handle 'Yi' and related
constraints as the 'x' constraint.
llvm-svn: 345873
We haven't supported compiling ObjC1 for a long time (and never will again), so
there isn't any reason to keep these separate. This patch replaces
LangOpts::ObjC1 and LangOpts::ObjC2 with LangOpts::ObjC.
Differential revision: https://reviews.llvm.org/D53547
llvm-svn: 345637
Generate the FP16FML intrinsics into arm_neon.h (AArch64 only for now).
Add two new type modifiers to NeonEmitter to handle the new prototypes.
Define __ARM_FEATURE_FP16FML when +fp16fml is enabled and guard the
intrinsics with the macro in arm_neon.h.
Based on a patch by Gao Yiling.
Differential Revision: https://reviews.llvm.org/D53633
llvm-svn: 345344
Similar to how ICC handles CPU-Dispatch on Windows, this patch uses the
resolver function directly to forward the call to the proper function.
This is not nearly as efficient as IFuncs of course, but is still quite
useful for large functions specifically developed for certain
processors.
This is unfortunately still limited to x86, since it depends on
__builtin_cpu_supports and __builtin_cpu_is, which are x86 builtins.
The naming for the resolver/forwarding function for cpu-dispatch was
taken from ICC's implementation, which uses the unmodified name for this
(no mangling additions). This is possible, since cpu-dispatch uses '.A'
for the 'default' version.
In 'target' multiversioning, this function keeps the '.resolver'
extension in order to keep the default function keeping the default
mangling.
Change-Id: I4731555a39be26c7ad59a2d8fda6fa1a50f73284
Differential Revision: https://reviews.llvm.org/D53586
llvm-svn: 345298
I'm unsure if KNL has this feature, but the backend never thought it did, only clang did. The predefined-arch-macros test lost the check for __RTM__ on KNL when it was removed from Skylake CPUs in r344117.
I think we want to drop it from KNL for consistency with Skylake anyway regardless of how we got here.
llvm-svn: 344978
The `GNUABIN32` environment in a target triple implies using the N32
ABI. This patch adds support for this environment and switches on N32
ABI if necessary.
Patch by YunQiang Su.
Differential revision: https://reviews.llvm.org/D51464
llvm-svn: 344570
Summary:
gcc defines macros such as __code_model_small_ based on the user passed command line flag -mcmodel. clang accepts a flag with the same name and similar effects, but does not generate any macro that the user can use. This cl narrows the gap between gcc and clang behaviour.
However, achieving full compatibility with gcc is not trivial: The set of valid values for mcmodel in gcc and clang are not equal. Also, gcc defines different macros for different architectures. In this cl, we only tackle an easy part of the problem and define the macro only for x64 architecture. When the user does not specify a mcmodel, the macro for small code model is produced, as is the case with gcc.
Reviewers: compnerd, MaskRay
Reviewed By: MaskRay
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D52920
llvm-svn: 344000
These intrinsics exist in icc. They can be found on the Intel Intrinsics Guide website.
All the backend support is in place to pattern match a load+bswap or a bswap+store pattern to the MOVBE instructions, so we just need to get the frontend to emit the correct IR. The pointer arguments in icc are declared as void*, so I had to jump through a packed struct to force a specific alignment on the load/store. It's the same trick we use in the unaligned vector load/store intrinsics.
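A hedged usage sketch; the names _loadbe_i32/_storebe_i32 follow the
Intel Intrinsics Guide and are assumptions here:

  #include <immintrin.h>

  /* With -mmovbe these should pattern match to single MOVBE
     instructions instead of mov + bswap pairs. */
  int read_be32(const void *p) { return _loadbe_i32(p); }
  void write_be32(void *p, int v) { _storebe_i32(p, v); }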
Differential Revision: https://reviews.llvm.org/D52586
llvm-svn: 343343
My previous change (rL340911) set the two features for architectures
>= 6, which wrongly includes v6m. Now they are set for >= 6 and not Cortex-M.
Differential Revision: https://reviews.llvm.org/D52644
llvm-svn: 343309
This patch allows targetting Armv8.5-A from Clang. Most of the
implementation is in TargetParser, so this is mostly just adding tests.
Patch by Pablo Barrio!
Differential revision: https://reviews.llvm.org/D52491
llvm-svn: 343111
Windows uses `unsigned short` for `wint_t`. Correct the type definition as
vended by the compiler. This type is defined in corecrt.h and is
unconditionally typedef'ed. cl does not have an equivalent to `__WINT_TYPE__`
which is why this was never detected.
llvm-svn: 342557
The instruction set first appeared with Westmere, but not all processors
in that and the next few generations have the instructions. According to
Wikipedia[1], the first generation in which all SKUs have AES
instructions are Skylake and Goldmont. I can't find any Skylake,
Kabylake, Kabylake-R or Cannon Lake currently listed at
https://ark.intel.com that says "Intel® AES New Instructions" "No".
This matches GCC commit
https://gcc.gnu.org/ml/gcc-patches/2018-08/msg01940.html
[1] https://en.wikipedia.org/wiki/AES_instruction_set
Patch By: thiagomacieira
Differential Revision: https://reviews.llvm.org/D51510
llvm-svn: 341862
__ARM_FEATURE_DSP is already set for targets with the +dsp feature. In
the backend, this target feature is also used to represent the
availability of the instructions that the ACLE guards through
the __ARM_FEATURE_SIMD32 macro. We don't have any cores that
implement one and not the other, so set this macro for cores later
than V6, or for Cortex-M cores for which the target parser, or the user,
reports that the 'dsp' instructions are supported.
Differential Revision: https://reviews.llvm.org/D51093
llvm-svn: 340911
As reported on http://lists.llvm.org/pipermail/cfe-dev/2018-August/058760.html,
this broke i386-freebsd11 due to its lack of atomic 64 bit primitives.
While that's not really this commit's fault, let's revert back to the old
behaviour until this can be fixed. This means generating cmpxchg8b etc for i386
and i486 which don't technically support those, but that's been the behaviour
for a long time, so a little longer probably doesn't hurt that much.
> Adjust MaxAtomicInlineWidth for i386/i486 targets.
>
> This is to fix the bug reported in https://bugs.llvm.org/show_bug.cgi?id=34347#c6.
> Currently, all MaxAtomicInlineWidth of x86-32 targets are set to 64. However,
> i386 doesn't support any cmpxchg related instructions. i486 only supports cmpxchg.
> So in this patch MaxAtomicInlineWidth is reset as follows:
> For i386, the MaxAtomicInlineWidth should be 0 because no cmpxchg is supported.
> For i486, the MaxAtomicInlineWidth should be 32 because it supports cmpxchg.
> For others 32 bits x86 cpu, the MaxAtomicInlineWidth should be 64 because of cmpxchg8b.
>
> Differential Revision: https://reviews.llvm.org/D42154
llvm-svn: 340666
subtarget features for indirect calls and indirect branches.
This is in preparation for enabling *only* the call retpolines when
using speculative load hardening.
I've continued to use subtarget features for now as they continue to
seem the best fit given the lack of other retpoline like constructs so
far.
The LLVM side is pretty simple. I'd like to eventually get rid of the
old feature, but not sure what backwards compatibility issues that will
cause.
This does remove the "implies" from requesting an external thunk. This
always seemed somewhat questionable and is now clearly not desirable --
you specify a thunk the same way no matter which set of things are
getting retpolines.
I really want to keep this nicely isolated from end users and just an
LLVM implementation detail, so I've moved the `-mretpoline` flag in
Clang to no longer rely on a specific subtarget feature by that name and
instead to be directly handled. In some ways this is simpler, but in
order to preserve existing behavior I've had to add some fallback code
so that users who relied on merely passing -mretpoline-external-thunk
continue to get the same behavior. We should eventually remove this
I suspect (we have never tested that it works!) but I've not done that
in this patch.
Differential Revision: https://reviews.llvm.org/D51150
llvm-svn: 340515
Set __mips_fpr to 0 if o32 ABI is used with either -mfpxx
or none of -mfp32, -mfpxx, -mfp64 being specified.
Introduce additional checks:
-mfpxx is only to be used in conjunction with the o32 ABI.
Report an error when incompatible options are provided.
Formerly no errors were raised when combining n32/n64 ABIs
with -mfp32 and -mfpxx.
There are other cases when __mips_fpr should be set to 0
that are not covered, ex. using o32 on a mips64 cpu
which is valid but not supported in the backend as of yet.
Differential Revision: https://reviews.llvm.org/D50557
llvm-svn: 340391
Fast FMAF is not a sufficient condition to enable denormals.
Before VI, enabling denormals caused F32 instructions to
run at F64 speeds.
llvm-svn: 339278
The way address space declarations for builtins currently work
is nearly useless. The code assumes the address spaces used for
builtins is a confusingly named "target address space" from user
code using __attribute__((address_space(N))) that matches
the builtin declaration. There's no way to use this to declare
a builtin that returns a language specific address space.
The terminology used is highly confusing since it has nothing
to do with the address space selected by the target to use
for a language address space.
This feature is essentially unused as-is. AMDGPU and NVPTX
are the only in-tree targets attempting to use this. The AMDGPU
builtins certainly do not behave as intended (i.e. all of the
builtins returning pointers can never compile because the numbered
address space never matches the expected named address space).
The NVPTX builtins are missing tests for some, and the others
seem to rely on an implicit addrspacecast.
Change the used address space for builtins based on a target
hook to allow using a language address space for a builtin.
This allows the same builtin declaration to be used for multiple
languages with similarly purposed address spaces (e.g. the same
AMDGPU builtin can be used in OpenCL and CUDA even though the
constant address spaces are arbitrarily different).
This breaks the possibility of using arbitrary numbered
address spaces alongside the named address spaces for builtins.
If this is an issue we probably need to introduce another builtin
declaration character to distinguish language address spaces from
so-called "target address spaces".
llvm-svn: 338707
This adds tests for Armv8.4-A, and also some v8.2 and v8.3 tests that were
missing.
Differential Revision: https://reviews.llvm.org/D50068
llvm-svn: 338525
Summary: Microsoft's C++ object model for ARM64 is the same as that for X86_64.
For example, small structs with non-trivial copy constructors or virtual
function tables are passed indirectly. Currently, they are passed in registers
when compiled with clang.
Reviewers: rnk, mstorsjo, TomTan, haripul, javed.absar
Reviewed By: rnk, mstorsjo
Subscribers: kristof.beyls, chrib, llvm-commits, cfe-commits
Differential Revision: https://reviews.llvm.org/D49770
llvm-svn: 338076
Changing it to unsigned long (which is 32-bit on wasm32) makes it the same
type as wasm64 (where unsigned long is 64-bit), which would eliminate the most
common cause for mangled names being different between wasm32 and wasm64. For
example, export lists containing symbol names could now often be the same
between wasm32 and wasm64.
Differential Revision: https://reviews.llvm.org/D40526
llvm-svn: 337783
As documented here: https://software.intel.com/en-us/node/682969 and
https://software.intel.com/en-us/node/523346, cpu_dispatch is an ICC
feature that provides function multiversioning.
This feature is implemented with two attributes: First, cpu_specific,
which specifies the individual function versions. Second, cpu_dispatch,
which specifies the location of the resolver function and the list of
resolvable functions.
This is valuable since it provides a mechanism whereby the resolver's TU
can be specified in one location and the individual implementations
each in their own translation units.
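A minimal sketch of the attribute syntax described above (the CPU names are illustrative):
```
// Each cpu_specific attribute marks one version of the function...
__attribute__((cpu_specific(ivybridge)))
void do_work(void) { /* version tuned for ivybridge */ }

__attribute__((cpu_specific(atom)))
void do_work(void) { /* version tuned for atom */ }

// ...and cpu_dispatch names the resolvable versions. An empty definition
// (as ICC requires) or a plain declaration are both accepted.
__attribute__((cpu_dispatch(ivybridge, atom)))
void do_work(void) {}
```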
The goal of this patch is to be source-compatible with ICC, so this
implementation diverges from the ICC implementation in a few ways:
1- Linux x86/64 only: This implementation uses ifuncs in order to
properly dispatch functions. This is a valuable performance benefit
over the ICC implementation. A future patch will be provided to enable
this feature on Windows, but it will obviously more closely fit ICC's
implementation.
2- CPU Identification functions: ICC uses a set of custom functions to identify
the feature list of the host processor. This patch uses the cpu_supports
functionality in order to better align with 'target' multiversioning.
3- cpu_dispatch function def/decl: ICC's cpu_dispatch requires that the function
marked cpu_dispatch be an empty definition. This patch supports that as well,
however declarations are also permitted, since the linker will solve the
issue of multiple emissions.
Differential Revision: https://reviews.llvm.org/D47474
llvm-svn: 337552
The SPIR target currently allows for half precision floating point types to be
emitted using the LLVM intrinsic functions which convert half types to floats
and doubles. However, this is illegal in SPIR as the only intrinsic allowed by
SPIR is memcpy, as per section 3 of the SPIR specification. Currently this is
leading to an assert being hit in the Clang CodeGen when attempting to emit a
constant or literal _Float16 type in a comparison operation on a SPIR or SPIR64
target. This assert stems from the CodeGen attempting to emit a constant half
value as an integer because the backend has specified that it is using these
half conversion intrinsics (which represents half as i16). This patch prevents
SPIR targets from using these intrinsics by overloading the responsible target
info method, marks SPIR targets as having a legal half type and provides
additional regression testing for the _Float16 type on SPIR targets.
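A hedged reconstruction (not a test from the patch) of the sort of code that hit the assert:
```
// Comparing a _Float16 constant on a SPIR/SPIR64 target used to make
// CodeGen try to emit the constant as an i16 through the half-conversion
// intrinsics, triggering the assert described above.
_Float16 threshold = (_Float16)0.5;
int above(_Float16 x) { return x > threshold; }
```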
Patch by: Stephen McGroarty
Differential Revision: https://reviews.llvm.org/D48188
llvm-svn: 335111
Disable the use of the type __float128 for PPC machines older
than Power9.
The use of -mfloat128 for PPC machines older than Power9 will result
in an error.
Differential Revision: https://reviews.llvm.org/D48088
llvm-svn: 334613
Adding __attribute__((aligned(32))) to __m256 breaks the implementation
of _mm256_loadu_ps on Windows. On Windows, alignment attributes have
higher precedence than packing attributes.
We also might want to carefully consider the consequences of changing
our vector typedefs, since many users copy them and invent their own
new, non-Intel specific vector type names.
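For context, a minimal sketch of the contract at stake (compile with -mavx):
```
#include <immintrin.h>

// _mm256_loadu_ps must accept pointers of arbitrary alignment; on Windows,
// where alignment attributes beat packing attributes, an aligned(32) __m256
// would break callers like this one.
__m256 load8(const float *p) { return _mm256_loadu_ps(p); }
```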
llvm-svn: 333958
This fixes two major problems:
- We were not capping vector alignment as desired on 32-bit ARM.
- We were using different alignments based on the AVX settings on
Intel, so we did not have a consistent ABI.
This is an ABI break, but we think we can get away with it because
vectors tend to be used mostly in inline code (which is why not having
a consistent ABI has not proven disastrous on Intel).
Intel's AVX types are specified as having 32-byte / 64-byte alignment,
so align them explicitly instead of relying on the base ABI rule.
Note that this sort of attribute is stripped from template arguments
in template substitution, so there's a possibility that code templated
over vectors will produce inadequately-aligned objects. The right
long-term solution for this is for alignment attributes to be
interpreted as true qualifiers and thus preserved in the canonical type.
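An assumed illustration of the template caveat mentioned above:
```
#include <immintrin.h>

// The explicit alignment attribute may be stripped when __m256 is used as a
// template argument, so objects like this may end up under-aligned until
// alignment attributes become true qualifiers.
template <typename T> struct Wrap { T value; };
Wrap<__m256> w;
```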
llvm-svn: 333791
An intrinsic for an old instruction, as described in the Intel SDM.
Reviewers: craig.topper, rnk
Reviewed By: craig.topper, rnk
Differential Revision: https://reviews.llvm.org/D47142
llvm-svn: 333256
in gcc by https://gcc.gnu.org/ml/gcc-cvs/2018-04/msg00534.html.
The -mibt feature flag is being removed, and the -fcf-protection
option now also defines a CET macro and causes errors when used
on non-X86 targets, while X86 targets no longer check for -mibt
and -mshstk to determine if -fcf-protection is supported. -mshstk
is now used only to determine availability of shadow stack intrinsics.
Comes with an LLVM patch (D46882).
Patch by mike.dvoretsky
Differential Revision: https://reviews.llvm.org/D46881
llvm-svn: 332704
When looking at lib/Basic/Targets/OSTargets.h, I noticed that _REENTRANT is defined
unconditionally on Solaris, unlike all other targets and unlike what either Studio cc
(which only defines it with -mt) or gcc (which only defines it with -pthread) does.
This patch follows that lead.
Differential Revision: https://reviews.llvm.org/D41241
llvm-svn: 332343
The option enables use of 32-bit pointers for accessing
const/local/shared memory. The feature is disabled by default.
Differential Revision: https://reviews.llvm.org/D46148
llvm-svn: 331938
This is similar to the LLVM change https://reviews.llvm.org/D46290.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
Differential Revision: https://reviews.llvm.org/D46320
llvm-svn: 331834
Summary:
The getConstraintRegister method is used by semantic checking of
inline assembly statements in order to diagnose conflicts between
clobber list and input/output lists. Currently ARM and AArch64 don't
override getConstraintRegister, so conflicts between registers
assigned to variables in asm labels and clobber lists are not
diagnosed. Such conflicts can cause assertion failures in the back end
and even miscompilations.
This patch implements getConstraintRegister for ARM and AArch64
targets. Since these targets don't have single-register constraints,
the implementation is trivial and just returns the register specified
in an asm label (if any).
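A hypothetical ARM example of the kind of conflict that is now diagnosed:
```
// x is pinned to r1 via an asm label, but the asm statement also clobbers
// r1; previously this conflict slipped through to the backend.
int bump(int a) {
  register int x asm("r1") = a;
  asm volatile("add %0, %0, #1" : "+r"(x) : : "r1");
  return x;
}
```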
Reviewers: eli.friedman, javed.absar, thopre
Reviewed By: thopre
Subscribers: rengolin, eraman, rogfer01, myatsina, kristof.beyls, cfe-commits, chrib
Differential Revision: https://reviews.llvm.org/D45965
llvm-svn: 331164
This adds a pre-defined macro to test if the compiler has support for the
v8.2-A dot product intrinsics in AArch32 mode.
The AArch64 equivalent has already been added by rL330229.
The ACLE spec which describes this macro hasn't been published yet, but this is
based on the final internal draft, and GCC has already implemented this.
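A sketch of how such a macro is typically consumed; the spelling __ARM_FEATURE_DOTPROD follows the ACLE draft and is assumed here, since the commit text above does not spell it out:
```
#if defined(__ARM_FEATURE_DOTPROD)
#include <arm_neon.h>
// The v8.2-A dot product intrinsics may be used here.
#endif
```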
Differential revision: https://reviews.llvm.org/D46108
llvm-svn: 331038
When rebasing https://reviews.llvm.org/D40898 with GCC 5.4 on Solaris 11.4, I ran
into a few instances of
In file included from /vol/llvm/src/compiler-rt/local/test/asan/TestCases/Posix/asan-symbolize-sanity-test.cc:19:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/string:40:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/bits/char_traits.h:39:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/bits/stl_algobase.h:64:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/bits/stl_pair.h:59:
In file included from /usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/bits/move.h:57:
/usr/gcc/5/lib/gcc/x86_64-pc-solaris2.11/5.4.0/../../../../include/c++/5.4.0/type_traits:311:39: error: __float128 is not supported on this target
struct __is_floating_point_helper<__float128>
^
during make check-all. The line above is inside
#if !defined(__STRICT_ANSI__) && defined(_GLIBCXX_USE_FLOAT128)
template<>
struct __is_floating_point_helper<__float128>
: public true_type { };
#endif
While the libstdc++ header indicates support for __float128, clang does not, but
should. The following patch implements this and fixes those errors.
Differential Revision: https://reviews.llvm.org/D41240
llvm-svn: 330572
Currently, the interaction between the triple, the CPU, and the
supported features is a mess: the driver edits the triple to indicate
the supported architecture version, and the LLVM backend uses this to
figure out what instructions are legal. This makes it difficult to
understand what's happening, and makes it impossible to LTO together two
modules with different computed architectures.
Instead of relying on triple rewriting to get the correct target
features, we should add the right target features explicitly.
Differential Revision: https://reviews.llvm.org/D45240
llvm-svn: 330169
The WBNOINVD instruction writes back all modified
cache lines in the processor’s internal cache to main memory
but does not invalidate (flush) the internal caches.
Reviewers: craig.topper, zvi, ashlykov
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D43817
llvm-svn: 329848
When NVPTX TARGET_BUILTIN specifies sm_XX or ptxYY as required feature,
consider those features available if we're compiling for GPU >= sm_XX or have
enabled PTX version >= ptxYY.
Differential Revision: https://reviews.llvm.org/D45061
llvm-svn: 329829
Sometimes when people compile bpf programs with
"clang ... -target bpf ...", the kernel header
files may contain host-arch inline assembly code
as in the patch https://patchwork.kernel.org/patch/10119683/
by Arnaldo Carvaldo de Melo.
The current workaround in the above patch
is to guard the inline assembly with an "#ifndef __BPF__"
macro check. So when __BPF__ is defined, these macros will
have no use.
Such a method is not extensible. As a matter of fact,
most of this inline assembly will be thrown away
at the end of clang compilation.
So for bpf target, this patch accepts all asm register
names in clang AST stage. The name will be checked
again during llc code generation if the inline assembly
code is indeed for bpf programs.
With this patch, the above "#ifndef __BPF__" is not needed
any more in https://patchwork.kernel.org/patch/10119683/.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 329823
amdgcn targets only support HIP, which does not define __CUDA_ARCH__.
This is a partial unroll of r329232 / D45277.
Differential Revision: https://reviews.llvm.org/D45387
llvm-svn: 329584
Found via codespell -q 3 -I ../clang-whitelist.txt
Where whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
Summary:
This patch extends getTargetDefines and implements handleTargetFeatures
and hasFeature, and defines the corresponding macros for those features.
Reviewers: asb, apazos, eli.friedman
Differential Revision: https://reviews.llvm.org/D44727
Patch by Kito Cheng.
llvm-svn: 329278
Microsoft has reserved 'U' for the PreserveMostCC which is used in the
swift runtime. Add support for this. This allows the swift runtime to
be built for Windows again.
llvm-svn: 329025
Summary:
Allow rN registers to be simply parsed as the corresponding xN registers.
The "register ... asm("rN")" syntax is a command to the
compiler's register allocator, not an operand to any individual assembly
instruction. GCC documents this syntax as "...the name of the register
that should be used."
This is needed to support the changes in Linux kernel (see
https://lkml.org/lkml/2018/3/1/268 )
Note: This will add support only for the limited use case of
register ... asm("rN"). Any other uses that make rN leak into assembly
are not supported.
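A small sketch of the now-accepted spelling (the register choice is illustrative):
```
unsigned long read_scratch(void) {
  // "r4" in the asm label is parsed as the corresponding x4 register.
  register unsigned long value asm("r4");
  asm volatile("mov %0, #0" : "=r"(value));
  return value;
}
```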
Reviewers: kristof.beyls, rengolin, peter.smith, t.p.northover
Reviewed By: peter.smith
Subscribers: javed.absar, eraman, cfe-commits, srhines
Differential Revision: https://reviews.llvm.org/D44815
llvm-svn: 328829
ObjC and ObjC++ pass non-trivial structs in a way that is incompatible
with each other. For example:
typedef struct {
  id f0;
  __weak id f1;
} S;
// this code is compiled in c++.
extern "C" {
void foo(S s);
}
void caller() {
  // the caller passes the parameter indirectly and destructs it.
  foo(S());
}
// this function is compiled in c.
// 'a' is passed directly and is destructed in the callee.
void foo(S a) {
}
This patch fixes the incompatibility by passing and returning structs
with __strong or __weak fields using the C ABI in C++ mode. __strong and
__weak fields in a struct do not cause the struct to be destructed in
the caller and __strong fields do not cause the struct to be passed
indirectly.
Also, this patch fixes the microsoft ABI bug mentioned here:
https://reviews.llvm.org/D41039?id=128767#inline-364710
rdar://problem/38887866
Differential Revision: https://reviews.llvm.org/D44908
llvm-svn: 328731
Need to override convertConstraint to recognise amdgpu specific register names.
Differential Revision: https://reviews.llvm.org/D44533
llvm-svn: 328359
For generating NEON intrinsics, this determines the NEON data type, and whether
it should be a half type or an i16 type. That is, we still always use a half
type for AArch64 (this hasn't changed), and now also use one for ARM, but only
when FullFP16 is enabled; otherwise we use i16.
This is intended to be non-functional change, but together with the backend
work in D44538 which adds support for f16 vectors, this enables adding the
AArch32 FP16 (vector) intrinsics.
Differential Revision: https://reviews.llvm.org/D44561
llvm-svn: 327836
This is a partial recommit of r327189 that was reverted
due to test issues. I.e., this recommits minimal functional
change, the FP16 feature test macros, and adds tests that
were missing in the original commit.
llvm-svn: 327455
- Expand GK_*s (i.e. GFX6 -> GFX600, GFX601, etc.)
- This allows us to choose features correctly in some cases (for example, fast fmaf is available on gfx600, but not gfx601)
- Move HasFMAF, HasFP64, HasLDEXPF to GPUInfo tables
- Add HasFastFMA, HasFastFMAF to GPUInfo tables
- Add missing tests
llvm-svn: 326254
Summary:
If the flag -fforce-enable-int128 is passed, it will enable support for __int128_t and __uint128_t types.
This flag can then be used to build compiler-rt for RISCV32.
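A minimal example of what becomes possible on RISCV32 with the flag:
```
// Compile with: clang --target=riscv32 -fforce-enable-int128 ...
__int128_t wide_mul(long long a, long long b) {
  return (__int128_t)a * b;   // may be lowered via compiler-rt helpers
}
__uint128_t all_ones = ~(__uint128_t)0;
```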
Reviewers: asb, kito-cheng, apazos, efriedma
Reviewed By: asb, efriedma
Subscribers: shiva0217, efriedma, jfb, dschuff, sdardis, sbc100, jgravelle-google, aheejin, rbar, johnrusso, simoncook, jordy.potman.lists, sabuasal, niosHD, cfe-commits
Differential Revision: https://reviews.llvm.org/D43105
llvm-svn: 326045
LLVM now supports a new target feature "alu32", which can be enabled or
disabled with "-mattr=[+|-]alu32" when using llc.
This patch hooks Clang up with it, so the feature can also be toggled by
passing the related options to Clang, for example:
-Xclang -target-feature -Xclang +alu32
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Yonghong Song <yhs@fb.com>
llvm-svn: 325996
Cannon Lake does not support CLWB, therefore it
does not include all features listed under SKX.
Patch by Gabor Buella
Differential Revision: https://reviews.llvm.org/D43459
llvm-svn: 325655
This patch provides mitigation for CVE-2017-5715, Spectre variant two,
which affects the P5600 and P6600. It provides the option
-mindirect-jump=hazard, which instructs the LLVM backend to replace
indirect branches with their hazard barrier variants.
This option is accepted when targeting MIPS revision two or later.
The mitigation strategy suggested by MIPS for these processors is to
use two hazard barrier instructions. 'jalr.hb' and 'jr.hb' are hazard
barrier variants of the 'jalr' and 'jr' instructions respectively.
These instructions impede the execution of the instruction stream until
architecturally defined hazards (changes to the instruction stream,
privileged registers which may affect execution) are cleared. These
instructions in MIPS' designs are not speculated past.
These instructions are used with the option -mindirect-jump=hazard
when branching indirectly and for indirect function calls.
These instructions are defined by the MIPS32R2 ISA, so this mitigation
method is not compatible with processors which implement an earlier
revision of the MIPS ISA.
Implementation note: I've opted to provide this as an
-mindirect-jump={hazard,...} style option in case alternative
mitigation methods are required for other implementations of the MIPS
ISA in future, e.g. retpoline style solutions.
Reviewers: atanasyan
Differential Revision: https://reviews.llvm.org/D43487
llvm-svn: 325651
Summary:
Make clang accept `-msahf` (and `-mno-sahf`) flags to activate the
`+sahf` feature for the backend, for bug 36028 (Incorrect use of
pushf/popf enables/disables interrupts on amd64 kernels). This was
originally submitted in bug 36037 by Jonathan Looney
<jonlooney@gmail.com>.
As described there, GCC also uses `-msahf` for this feature, and the
backend already recognizes the `+sahf` feature. All that is needed is to
teach clang to pass this on to the backend.
The mapping of feature support onto CPUs may not be complete; rather, it
was chosen to match LLVM's idea of which CPUs support this feature (see
lib/Target/X86/X86.td).
I also updated the affected test case (CodeGen/attr-target-x86.c) to
match the emitted output.
Reviewers: craig.topper, coby, efriedma, rsmith
Reviewed By: craig.topper
Subscribers: emaste, cfe-commits
Differential Revision: https://reviews.llvm.org/D43394
llvm-svn: 325446
Apparently storing the pointer to a StringLiteral as
a StringRef caused this section of code to issue a ubsan
warning. This will hopefully fix that.
llvm-svn: 324687
Due to what seems to be a bug in older versions of MSVC, constexpr
member arrays with a redefinition (to force emission) require
their initial definition to spell out the size between the brackets.
llvm-svn: 324682
When rejecting a march= or target-cpu command line parameter,
the message is quite lacking. This patch adds a note that prints
all possible values for the current target, if the target supports it.
This adds support for the ARM/AArch64 targets (more to come!).
Differential Revision: https://reviews.llvm.org/D42978
llvm-svn: 324673
Clang can use CUDA-9.1 now, though the new APIs are not implemented yet.
The major change is that headers in CUDA-9.1 went through substantial
changes that started in CUDA-9.0 which required substantial changes
in the cuda compatibility headers provided by clang.
There are two major issues:
* CUDA SDK no longer provides declarations for libdevice functions.
* A lot of device-side functions have become nvcc's builtins and
CUDA headers no longer contain their implementations.
This patch changes the way CUDA headers are handled if we compile
with CUDA 9.x. Both 9.0 and 9.1 are affected.
* Clang provides its own declarations of libdevice functions.
* For CUDA-9.x clang now provides implementation of device-side
'standard library' functions using libdevice.
This patch should not affect compilation with CUDA-8. There may be
some observable differences for CUDA-9.0, though they are not expected
to affect functionality.
Tested: CUDA test-suite tests for all supported combinations of:
CUDA: 7.0,7.5,8.0,9.0,9.1
GPU: sm_20, sm_35, sm_60, sm_70
Differential Revision: https://reviews.llvm.org/D42513
llvm-svn: 323713
This is to fix the bug reported in https://bugs.llvm.org/show_bug.cgi?id=34347#c6.
Currently, all MaxAtomicInlineWidth of x86-32 targets are set to 64. However,
i386 doesn't support any cmpxchg related instructions. i486 only supports cmpxchg.
So in this patch MaxAtomicInlineWidth is reset as follows:
For i386, the MaxAtomicInlineWidth should be 0 because no cmpxchg is supported.
For i486, the MaxAtomicInlineWidth should be 32 because it supports cmpxchg.
For other 32-bit x86 CPUs, the MaxAtomicInlineWidth should be 64 because of cmpxchg8b.
Differential Revision: https://reviews.llvm.org/D42154
llvm-svn: 323281
Summary:
First, we need to explain the core of the vulnerability. Note that this
is a very incomplete description, please see the Project Zero blog post
for details:
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
The basis for branch target injection is to direct speculative execution
of the processor to some "gadget" of executable code by poisoning the
prediction of indirect branches with the address of that gadget. The
gadget in turn contains an operation that provides a side channel for
reading data. Most commonly, this will look like a load of secret data
followed by a branch on the loaded value and then a load of some
predictable cache line. The attacker then uses the timing of the processor's
cache to determine which direction the branch took *in the speculative
execution*, and in turn what one bit of the loaded value was. Due to the
nature of these timing side channels and the branch predictor on Intel
processors, this allows an attacker to leak data only accessible to
a privileged domain (like the kernel) back into an unprivileged domain.
The goal is simple: avoid generating code which contains an indirect
branch that could have its prediction poisoned by an attacker. In many
cases, the compiler can simply use directed conditional branches and
a small search tree. LLVM already has support for lowering switches in
this way and the first step of this patch is to disable jump-table
lowering of switches and introduce a pass to rewrite explicit indirectbr
sequences into a switch over integers.
However, there is no fully general alternative to indirect calls. We
introduce a new construct we call a "retpoline" to implement indirect
calls in a non-speculatable way. It can be thought of loosely as
a trampoline for indirect calls which uses the RET instruction on x86.
Further, we arrange for a specific call->ret sequence which ensures the
processor predicts the return to go to a controlled, known location. The
retpoline then "smashes" the return address pushed onto the stack by the
call with the desired target of the original indirect call. The result
is a predicted return to the next instruction after a call (which can be
used to trap speculative execution within an infinite loop) and an
actual indirect branch to an arbitrary address.
On 64-bit x86 ABIs, this is especially easy to do in the compiler by
using a guaranteed scratch register to pass the target into this device.
For 32-bit ABIs there isn't a guaranteed scratch register and so several
different retpoline variants are introduced to use a scratch register if
one is available in the calling convention and to otherwise use direct
stack push/pop sequences to pass the target address.
This "retpoline" mitigation is fully described in the following blog
post: https://support.google.com/faqs/answer/7625886
We also support a target feature that disables emission of the retpoline
thunk by the compiler to allow for custom thunks if users want them.
These are particularly useful in environments like kernels that
routinely do hot-patching on boot and want to hot-patch their thunk to
different code sequences. They can write this custom thunk and use
`-mretpoline-external-thunk` *in addition* to `-mretpoline`. In this
case, on x86-64 the thunk name must be:
```
__llvm_external_retpoline_r11
```
or on 32-bit:
```
__llvm_external_retpoline_eax
__llvm_external_retpoline_ecx
__llvm_external_retpoline_edx
__llvm_external_retpoline_push
```
And the target of the retpoline is passed in the named register, or in
the case of the `push` suffix on the top of the stack via a `pushl`
instruction.
There is one other important source of indirect branches in x86 ELF
binaries: the PLT. These patches also include support for LLD to
generate PLT entries that perform a retpoline-style indirection.
The only other indirect branches remaining that we are aware of are from
precompiled runtimes (such as crt0.o and similar). The ones we have
found are not really attackable, and so we have not focused on them
here, but eventually these runtimes should also be replicated for
retpoline-ed configurations for completeness.
For kernels or other freestanding or fully static executables, the
compiler switch `-mretpoline` is sufficient to fully mitigate this
particular attack. For dynamic executables, you must compile *all*
libraries with `-mretpoline` and additionally link the dynamic
executable and all shared libraries with LLD and pass `-z retpolineplt`
(or use similar functionality from some other linker). We strongly
recommend also using `-z now` as non-lazy binding allows the
retpoline-mitigated PLT to be substantially smaller.
When manually applying transformations similar to `-mretpoline` to the
Linux kernel we observed very small performance hits to applications
running typical workloads, and relatively minor hits (approximately 2%)
even for extremely syscall-heavy applications. This is largely due to
the small number of indirect branches that occur in performance
sensitive paths of the kernel.
When using these patches on statically linked applications, especially
C++ applications, you should expect to see a much more dramatic
performance hit. For microbenchmarks that are switch, indirect-, or
virtual-call heavy we have seen overheads ranging from 10% to 50%.
However, real-world workloads exhibit substantially lower performance
impact. Notably, techniques such as PGO and ThinLTO dramatically reduce
the impact of hot indirect calls (by speculatively promoting them to
direct calls) and allow optimized search trees to be used to lower
switches. If you need to deploy these techniques in C++ applications, we
*strongly* recommend that you ensure all hot call targets are statically
linked (avoiding PLT indirection) and use both PGO and ThinLTO. Well
tuned servers using all of these techniques saw 5% - 10% overhead from
the use of retpoline.
We will add detailed documentation covering these components in
subsequent patches, but wanted to make the core functionality available
as soon as possible. Happy for more code review, but we'd really like to
get these patches landed and backported ASAP for obvious reasons. We're
planning to backport this to both 6.0 and 5.0 release streams and get
a 5.0 release with just this cherry picked ASAP for distros and vendors.
This patch is the work of a number of people over the past month: Eric, Reid,
Rui, and myself. I'm mailing it out as a single commit due to the time
sensitive nature of landing this and the need to backport it. Huge thanks to
everyone who helped out here, and everyone at Intel who helped out in
discussions about how to craft this. Also, credit goes to Paul Turner (at
Google, but not an LLVM contributor) for much of the underlying retpoline
design.
Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer
Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D41723
llvm-svn: 323155
As RV64 codegen has not yet been upstreamed into LLVM, we focus on RV32 driver
support (RV64 to follow).
Differential Revision: https://reviews.llvm.org/D39963
llvm-svn: 322276
Similarly, make -mno-fma and -mno-f16c imply -mno-avx512f.
Without this, "-mno-sse -mavx512f" ends up with avx512f being enabled in the frontend but disabled in the backend.
llvm-svn: 322245
Cf-protection is a target-independent flag that instructs the back-end to instrument control-flow mechanisms such as branches and returns.
For example in X86 this flag will be used to instrument Indirect Branch Tracking instructions.
Differential Revision: https://reviews.llvm.org/D40478
Change-Id: I5126e766c0e6b84118cae0ee8a20fe78cc373dea
llvm-svn: 322063
GCC's attribute 'target', in addition to being an optimization hint,
also allows function multiversioning. We currently have the former
implemented; this patch implements the latter.
This works by enabling functions with the same name/signature to coexist,
so that they can all be emitted. Multiversion state is stored in the
FunctionDecl itself, and SemaDecl manages the definitions.
Note that it ends up having to permit redefinition of functions so
that they can all be emitted. Additionally, all versions of the function
must be emitted, so this also manages that.
Note that this includes some additional rules that GCC does not have, since
defining something as a multiversioned function after a use of it has been made illegal.
The only 'history rewriting' that happens is if a function is emitted before
it has been converted to a multiversioned function, at which point its name
needs to be changed.
Function templates and virtual functions are NOT yet supported (not supported
in GCC either).
Additionally, constructors/destructors are disallowed, but the former is
planned.
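A minimal sketch of the GCC-style multiversioning this enables (the function name and feature strings are illustrative):
```
__attribute__((target("default")))
int hamming(unsigned x) {          // fallback version
  int n = 0;
  while (x) { n += x & 1; x >>= 1; }
  return n;
}

__attribute__((target("popcnt")))
int hamming(unsigned x) {          // chosen at runtime on CPUs with POPCNT
  return __builtin_popcount(x);
}
```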
llvm-svn: 322028
I based that commit on what was in Intel's public documentation here: https://software.intel.com/sites/default/files/managed/c5/15/architecture-instruction-set-extensions-programming-reference.pdf,
which specifically said CLWB wasn't available until Icelake.
But I've since cross checked with SDE and it thinks these features exist on CNL and ICL. So now I don't know what to believe.
I've added test coverage of the current behavior as part of the revert so at least now have proof of what we're doing.
llvm-svn: 321547
We have cannonlake and icelake inheriting from skylake server in a switch using fallthroughs. But they aren't perfect supersets of skylake server.
llvm-svn: 321504
Added vbmi2 feature recognition.
Added intrinsics support for vbmi2 instructions:
_mm[128,256,512]_mask[z]_compress_epi[16,32]
_mm[128,256,512]_mask_compressstoreu_epi[16,32]
_mm[128,256,512]_mask[z]_expand_epi[16,32]
_mm[128,256,512]_mask[z]_expandloadu_epi[16,32]
_mm[128,256,512]_mask[z]_sh[l,r]di_epi[16,32,64]
_mm[128,256,512]_mask_sh[l,r]dv_epi[16,32,64]
matching similar work on the backend (D40206).
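An assumed usage example for one of the new intrinsics (compile with -mavx512vbmi2 and AVX-512 enabled):
```
#include <immintrin.h>

// Keep only the 16-bit elements selected by the mask, packed toward the low
// end of the vector, zeroing the rest.
__m512i keep_selected(__mmask32 k, __m512i a) {
  return _mm512_maskz_compress_epi16(k, a);
}
```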
Differential Revision: https://reviews.llvm.org/D41557
llvm-svn: 321487
Added vpclmulqdq feature recognition.
Added intrinsics support for vpclmulqdq instructions:
_mm256_clmulepi64_epi128
_mm512_clmulepi64_epi128
matching similar work on the backend (D40101).
Differential Revision: https://reviews.llvm.org/D41573
llvm-svn: 321480
Added vaes feature recognition.
Added intrinsics support for vaes instructions, matching similar work on the backend (D40078):
_mm256_aesenc_epi128
_mm512_aesenc_epi128
_mm256_aesenclast_epi128
_mm512_aesenclast_epi128
_mm256_aesdec_epi128
_mm512_aesdec_epi128
_mm256_aesdeclast_epi128
_mm512_aesdeclast_epi128
llvm-svn: 321474
https://bugs.llvm.org/show_bug.cgi?id=35721 reports that x86intrin.h
is issuing a few warnings. This is because attribute target is using
isValidFeatureName for its source. It was also discovered that two of
these were missing from hasFeature.
Additionally, shstk and ibt are reordered alphabetically, as came
up during code review.
llvm-svn: 321324