by erasing the soft-float target feature if the rest of the front
end added it because of defaults or the soft float option.
Add tests for some of the targets that implement this hack.
llvm-svn: 236179
This issue was fixed elsewhere in r235396 in a more general way, hence these
changes no longer do anything. Keep the testcase, however, to ensure that we
don't regress this for ARM.
llvm-svn: 236104
When creating a global variable whose type is a struct with bitfields, we must
forcibly set the alignment of the global from the RecordDecl. We must do this so
that the proper bitfield alignment makes its way down to LLVM, since clang will
mangle the bitfields into one large type.
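As an illustrative (hypothetical) case, this is the kind of declaration affected:
a global whose struct type packs bitfields into one large integer and therefore
must keep the RecordDecl's alignment:
struct Flags { unsigned a : 3; unsigned b : 13; unsigned c : 16; };
struct Flags g_flags = { 1, 2, 3 };  /* emitted global must use the struct's alignment */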
llvm-svn: 235976
The GCC construct __attribute__((aligned)) is defined to set alignment
to "the default alignment for the target architecture" according to
the GCC documentation:
The default alignment is sufficient for all scalar types, but may not be
enough for all vector types on a target that supports vector operations.
The default alignment is fixed for a particular target ABI.
Clang currently hard-codes an alignment of 16 bytes for that construct,
which is correct on some platforms (including X86) but wrong on others
(including SystemZ). Since this value is ABI-relevant, it is important
to get it right for compatibility purposes.
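For illustration (a minimal sketch, not taken from the patch), the construct with
no explicit alignment value requests the target's default:
struct S { char c; } __attribute__((aligned));  /* default alignment, e.g. 16 bytes on X86 */
char buf[32] __attribute__((aligned));          /* likewise target-dependent after this change */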
This patch adds a new TargetInfo member "DefaultAlignForAttributeAligned"
that targets can set to the appropriate default __attribute__((aligned))
value.
Note that I'm deliberately *not* using the existing "SuitableAlign"
value, which is used to set the pre-defined macro __BIGGEST_ALIGNMENT__,
since those two values may not be the same on all platforms. In fact,
on X86, __attribute__((aligned)) always uses 16-byte alignment, while
__BIGGEST_ALIGNMENT__ may be larger if AVX2 or AVX-512 are supported.
(This is actually not yet correctly implemented in clang either.)
The patch provides a value for DefaultAlignForAttributeAligned only for
SystemZ, and leaves the default for all other targets at 16, which means
no visible change in behavior on all other targets. (The value is still
wrong for some other targets, but I'd prefer to leave it to the target
maintainers for those platforms to fix.)
llvm-svn: 235397
This patch corresponds to review:
http://reviews.llvm.org/D8930
This just adds a front end option to let the back end know the target has PPC
direct move instructions.
llvm-svn: 234683
This patch corresponds to review:
http://reviews.llvm.org/D8398
It adds some builtin functions to access the extended divide and bit permute instructions.
llvm-svn: 234547
Adds ARM Cortex-R4 and R4F support and tests in Clang. Though Cortex-R4
support was present, the support for hwdiv in thumb-mode was not defined
or tested properly. This has also been added.
llvm-svn: 234488
Do the same thing as win64. If we're not using COFF, use the ELF
manglings. Maybe if we are targeting *-windows-msvc-macho, we should
use darwin manglings, but I don't need to stir that pot today.
llvm-svn: 233819
This should fix build-bot failures after r233804.
The patch also adds a "systemz" feature, and renames the
"transactional-execution" feature to "htm", since it turns
out "-" is not a legal character in module feature names.
llvm-svn: 233807
The zEC12 provides the transactional-execution facility. This is exposed
to users via a set of builtin routines on other compilers. This patch
adds clang support to enable those builtins. In particular, the patch:
- enables the transactional-execution feature by default on zEC12
- allows overriding the presence of that feature via the -mhtm/-mno-htm options
- adds a predefined macro __HTM__ if the feature is enabled
- adds support for the transactional-execution GCC builtins
- adds Sema checking to verify the __builtin_tabort abort code
- adds the s390intrin.h header file (for GCC compatibility)
- adds s390 sections to the htmintrin.h and htmxlintrin.h header files
Since this is the first use of target-specific intrinsics on the platform,
the patch creates the include/clang/Basic/BuiltinsSystemZ.def file and
hooks it up in TargetBuiltins.h and lib/Basic/Targets.cpp.
An associated LLVM patch adds the required LLVM IR intrinsics.
For reference, the transactional-execution instructions are documented
in the z/Architecture Principles of Operation for the zEC12:
http://publibfp.boulder.ibm.com/cgi-bin/bookmgr/download/DZ9ZR009.pdf
The associated builtins are documented in the GCC manual:
http://gcc.gnu.org/onlinedocs/gcc/S_002f390-System-z-Built-in-Functions.html
The htmxlintrin.h intrinsics provided for compatibility with the IBM XL
compiler are documented in the "z/OS XL C/C++ Programming Guide".
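As a hedged usage sketch (builtin and macro names as documented in the GCC manual
linked above; compile for zEC12 with -mhtm):
#include <htmintrin.h>

static int increment_transactionally(int *counter)
{
  if (__builtin_tbegin((void *)0) == _HTM_TBEGIN_STARTED) {
    ++*counter;          /* transactional store */
    __builtin_tend();
    return 1;
  }
  return 0;              /* transaction did not start or was aborted */
}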
llvm-svn: 233804
Add Tool and ToolChain support for clang to target the NaCl OS using the NaCl
SDK for x86-32, x86-64 and ARM.
Includes nacltools::Assemble and Link which are derived from gnutools. They
are similar to Linux but different enough that they warrant their own class.
Also includes a NaCl_TC in ToolChains derived from Generic_ELF with library
and include paths suitable for an SDK and independent of the system tools.
Differential Revision: http://reviews.llvm.org/D8590
llvm-svn: 233594
Like on other 64-bit platforms, Int64Type should be SignedLong
on SystemZ, not SignedLongLong as per default. This could cause
ABI incompatibilities in certain cases (e.g. name mangling).
llvm-svn: 233544
they enable/disable.
This fixes two things:
a) sse4 isn't actually a target feature, don't treat it as one.
b) we weren't correctly disabling sse4.1 when we'd pass -mno-sse4
after enabling it, thus passing preprocessor directives and
(soon) passing the function attribute as well when we shouldn't.
llvm-svn: 233223
This patch adds support for the Hardware Transactional Memory (HTM) extension
supported by ISA 2.07 (POWER8). The intrinsic support is based on the GCC one [1],
with both the 'PowerPC HTM Low Level Built-in Functions' and the 'PowerPC HTM
High Level Inline Functions' implemented.
Along with the builtins, a new driver switch is added to enable/disable HTM
instruction support (-mhtm), as well as a header with common definitions (mostly
to parse the TFHAR register value). The HTM switch also sets the preprocessor
macro __HTM__.
Using HTM requires a recent kernel with PPC HTM enabled. Tested on
powerpc64 and powerpc64le.
This is sent along with an LLVM patch that enables the builtins and the option switch.
[1]
https://gcc.gnu.org/onlinedocs/gcc/PowerPC-Hardware-Transactional-Memory-Built-in-Functions.html
Phabricator Review: http://reviews.llvm.org/D8248
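A minimal, illustrative sketch of guarding code on the new macro:
void do_update(void) {
#ifdef __HTM__
  /* hardware transactional memory path, enabled via -mhtm */
#else
  /* lock-based fallback */
#endif
}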
llvm-svn: 233205
On Android x86_32 the long double is only 64 bits (compared to 80 bits
on Linux x86_32), and on Android x86_64 the long double is IEEEquad
(compared to x87DoubleExtended on Linux x86_64). This CL creates new
TargetInfo classes for these targets to represent these differences.
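A quick, illustrative way to observe the difference (the expected values follow
from the floating-point formats described above):
#include <float.h>
#include <stdio.h>

int main(void) {
  /* Expected: LDBL_MANT_DIG is 53 on Android x86_32 (64-bit format),
     64 on Linux x86_32 (x87 extended), 113 on Android x86_64 (IEEE quad). */
  printf("LDBL_MANT_DIG = %d, sizeof(long double) = %zu\n",
         LDBL_MANT_DIG, sizeof(long double));
  return 0;
}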
Differential revision: http://reviews.llvm.org/D8357
llvm-svn: 233177
ARMv6K is another layer between ARMV6 and ARMV6T2. This is the Clang
side of the changes.
ARMV6 family LLVM implementation.
+-------------------------------------+
|                ARMV6                |
+----------------+--------------------+
| ARMV6M (thumb) | ARMV6K (arm,thumb) |
+----------------+--------------------+
|     ARMV6T2 (arm,thumb,thumb2)      |
+-------------------------------------+
|      ARMV7 (arm,thumb,thumb2)       |
+-------------------------------------+
From ARMV6K and ARMV6M onwards, processors have support for hint instructions
(SEV/WFE/WFI/NOP/YIELD), which can be either real or default to NOP. The two
architectures also use different encodings for them.
Patch by Vinicius Tinti.
llvm-svn: 232469
Support for the QPX vector instruction set, used on the IBM BG/Q supercomputer,
has recently been added to the LLVM PowerPC backend. This vector instruction
set requires some ABI modifications because the ABI on the BG/Q expects
<4 x double> vectors to be provided with 32-byte stack alignment, and to be
handled as native vector types (similar to how Altivec vectors are handled on
mainline PPC systems). I've named this ABI variant elfv1-qpx, have made this
the default ABI when QPX is supported, and have updated the ABI handling code
to provide QPX vectors with the correct stack alignment and associated
register-assignment logic.
llvm-svn: 231960
CloudABI can be identified by the __CloudABI__ preprocessor definition. The
system uses ELF executables.
CloudABI uses Unicode 7.0.0 for the encoding of wchar_t. As Unicode 7.0.0 is
synchronized with ISO/IEC 10646:2012 (released on 2012-06-01),
__STDC_ISO_10646__ is defined as 201206L.
llvm-svn: 231912
MSVC doesn't warn on this. Users are expected to apply the WINAPI macro
to functions passed by pointer to the Win32 API, and this macro expands
to __stdcall. This means we end up with a lot of useless noisy warnings
about ignored calling conventions when compiling code with clang for
Win64.
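An illustrative (hypothetical) declaration of the kind described above; on Win64
the convention is simply ignored:
typedef void (__stdcall *WinCallback)(void);  /* WINAPI expands to __stdcall */
void __stdcall my_callback(void);             /* previously drew an "ignored calling
                                                 convention" style warning on Win64 */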
llvm-svn: 230668
Currently, the NaN values emitted for MIPS architectures do not cover the
non-IEEE754-2008-compliant case. This change fixes the issue.
Patch by Vladimir Radosavljevic.
Differential Revision: http://reviews.llvm.org/D7882
llvm-svn: 230653
The patch teaches the clang driver to understand new MIPS ISA names,
pass appropriate options to the assembler, define corresponding macros, etc.
http://reviews.llvm.org/D7737
llvm-svn: 230092
Add some of the missing M and R class Cortex CPUs, namely:
Cortex-M0+ (called Cortex-M0plus for GCC compatibility)
Cortex-M1
SC000
SC300
Cortex-R5
llvm-svn: 229661
For compatibility with GCC (and because it's generally helpful information
otherwise inaccessible to the preprocessor). This appears to be canonically the
alignment of max_align_t (e.g. on i386, __BIGGEST_ALIGNMENT__ is 4 even though
vector types will be given greater alignment).
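A minimal, illustrative sketch of how the macro is typically used:
/* Request the largest alignment used for any scalar type on this target,
   e.g. 4 on i386 as noted above. */
char buffer[128] __attribute__((aligned(__BIGGEST_ALIGNMENT__)));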
Patch mostly by Mats Petersson.
llvm-svn: 228367
Summary:
It was used for interoperability with PNaCl's calling conventions, but
it's no longer needed.
Also remove NaCl*ABIInfo, which just existed to delegate to either the portable
or native ABIInfo, and remove checkCallingConvention, which was now a no-op
override.
Reviewers: jvoung
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D7206
llvm-svn: 227362
The test was fixed after a discussion with the revision author: the check
pattern was made more flexible as the "%call" part is not what we actually want
to check strictly there.
The original patch description:
===
Introduce SPIR calling conventions.
This implements Section 3.7 from the SPIR 1.2 spec:
SPIR kernels should use "spir_kernel" calling convention.
Non-kernel functions use "spir_func" calling convention. All
other calling conventions are disallowed.
The patch works only for OpenCL source. Any other uses will need
to ensure that kernels are assigned the spir_kernel calling
convention correctly.
===
llvm-svn: 226561
This implements Section 3.7 from the SPIR 1.2 spec:
SPIR kernels should use "spir_kernel" calling convention.
Non-kernel functions use "spir_func" calling convention. All
other calling conventions are disallowed.
The patch works only for OpenCL source. Any other uses will need
to ensure that kernels are assigned the spir_kernel calling
convention correctly.
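As a hedged OpenCL C sketch (hypothetical names): when compiling OpenCL for a SPIR
target, the kernel below receives the spir_kernel convention and the helper
receives spir_func:
int helper(int x) { return x * 2; }        /* spir_func */

__kernel void scale(__global int *out) {   /* spir_kernel */
  out[0] = helper(out[0]);
}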
llvm-svn: 226548
even though every basic source character literal has the same numerical value
as a narrow or wide character literal.
It appears that the FreeBSD folks are trying to use this macro to mean
something other than what the relevant standards say it means, but their usage
is conforming, so put up with it.
llvm-svn: 225751
Add additional constraint checking for target specific behaviour for inline
assembly constraints. We would previously silently let all arguments through
for these constraints. In cases where the constraints were violated, we could
end up failing to select instructions and triggering assertions, or worse,
silently ignoring instructions.
llvm-svn: 225244
use clang -cc1 matching the front end and backend. Fix up a couple
of tests that were testing aapcs for arm-linux-gnu.
The test that removes the aapcs abi calling convention removes
them because the default triple matches what the backend uses
for the calling convention there and so it doesn't need to be
explicitly stated - see the code in TargetInfo.cpp.
llvm-svn: 224491
Summary:
Because GCC doesn't use $1 for code generation, inline assembly code can use $1 without having to add it to the clobbers list.
LLVM, on the other hand, does not shy away from using $1, and this can cause conflicts with inline assembly which assumes GCC-like code generation.
A solution to this problem is to make Clang automatically clobber $1 for all MIPS inline assembly.
This is not the optimal solution, but it seems like a necessary compromise, for now.
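A hedged sketch of the pattern in question (hypothetical GCC-style inline assembly
that uses $1/$at without listing it as a clobber):
int add_via_at(int a, int b) {
  int res;
  __asm__ volatile(".set noat\n\t"
                   "addu $1, %1, %2\n\t"   /* uses $at without declaring it */
                   "move %0, $1\n\t"
                   ".set at"
                   : "=r"(res)
                   : "r"(a), "r"(b));
  return res;
}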
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6638
llvm-svn: 224428
basic microarchitecture names, and add support (with tests) for parsing
all of the basic microarchitecture names for CPUs documented to be
accepted by GCC with -march. I didn't go back through the 32-bit-only
old microarchitectures, but this at least brings the recent architecture
names up to speed. This is essentially the follow-up to the LLVM commit
r223769 which did similar cleanups for the LLVM CPUs.
One particular benefit is that you can now use -march=westmere in Clang
and get the LLVM westmere processor which is a different ISA variant (!)
and so quite significant.
Much like with r223769, I would appreciate the Intel folks carefully
thinking about the macros defined, names used, etc for the atom chips
and newest primary x86 chips. The current patterns seem quite strange to
me, especially here in Clang.
Note that I haven't replicated the per-microarchitecture macro defines
provided by GCC. I'm really opposed to source code using these rather
than using ISA feature macros.
llvm-svn: 223776
is for each machine. Fix up darwin tests that were testing for
aapcs on armv7-ios when the actual ABI is apcs.
Should be no user visible change without -cc1.
llvm-svn: 223429
Summary:
Allow CUDA host device functions with two code paths using __CUDA_ARCH__
to differentiate between code path being compiled.
For example:
__host__ __device__ void host_device_function(void) {
#ifdef __CUDA_ARCH__
  device_only_function();
#else
  host_only_function();
#endif
}
Patch by Jacques Pienaar.
Reviewed By: rnk
Differential Revision: http://reviews.llvm.org/D6457
llvm-svn: 223271
Summary:
In particular, remove the defaults and reorder fields so it matches the result of DataLayout::getStringDescription().
Change by David Neto.
Reviewers: dschuff, sdt
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D6482
llvm-svn: 223140
To support it in the frontend, the following has been added:
- generic address space type attribute;
- documentation for the OpenCL address space attributes;
- parsing of __generic(generic) keyword;
- test code for the parser and diagnostics.
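A short, illustrative OpenCL 2.0 sketch of the new qualifier:
void touch(__global int *g, __local int *l) {
  __generic int *p = g;   /* global -> generic conversion */
  p = l;                  /* local  -> generic conversion */
  *p = 0;
}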
llvm-svn: 222831
Summary:
This resolves [[ http://llvm.org/bugs/show_bug.cgi?id=17391 | PR17391 ]].
GCC's sources were used as a guide (couldn't find much information in ARM documentation).
Reviewers: doug.gregor, asl
Reviewed By: asl
Subscribers: asl, aemerson, cfe-commits
Differential Revision: http://reviews.llvm.org/D6339
llvm-svn: 222741
This option was misleading because it looked like it enabled the
language feature of SEH (__try / __except), when this option was really
controlling which EH personality function to use. Mingw only supports
SEH and SjLj EH on x86_64, so we can simply do away with this flag.
llvm-svn: 221963
Use the bitmask to store the set of enabled sanitizers instead of a
bitfield. On the negative side, it makes syntax for querying the
set of enabled sanitizers a bit more clunky. On the positive side, we
will be able to use SanitizerKind to eventually implement the
new semantics for -fsanitize-recover= flag, that would allow us
to make some sanitizers recoverable, and some non-recoverable.
No functionality change.
llvm-svn: 221558
This CPU definition is redundant. The Cortex-A9 is defined as
supporting multiprocessing extensions. Remove references to this CPU.
This CPU was recently removed from LLVM. See http://reviews.llvm.org/D6057
Change-Id: I62ae7cc656fcae54fbaefc4b6976e77e694a8678
llvm-svn: 221458
This patch simplifies how default target features are set for AMD bdver2
and bdver1. In particular, method 'getDefaultFeatures' now implements a
fallthrough from case 'CK_BDVER2' to case 'CK_BDVER1'.
That is because 'bdver2' has the same features available in bdver1 plus
BMI, FMA, F16C and TBM.
This patch also adds missing checks for predefined macros in test
predefined-arch-macros.c. In the case of BTVER2, the test now also checks
for F16C, BMI and PCLMUL. In the case of BDVER3 and BDVER4, the test now
also checks for the presence of FSGSBASE.
Differential Revision: http://reviews.llvm.org/D6134
llvm-svn: 221449
The most complex aspect of the convention is the handling of homogeneous
vector and floating point aggregates. Reuse the homogeneous aggregate
classification code that we use on PPC64 and ARM for this.
This convention also has a C mangling, and we apparently implement that
in both Clang and LLVM.
Reviewed By: majnemer
Differential Revision: http://reviews.llvm.org/D6063
llvm-svn: 221006
Now that we have initial support for VSX, we can begin adding
intrinsics for programmer access to VSX instructions. This patch
performs the necessary enablement in the front end, and tests it by
implementing intrinsics for minimum and maximum using the vector
double data type.
The main change in the front end is to no longer disallow "vector" and
"double" in the same declaration (lib/Sema/DeclSpec.cpp), but "vector"
and "long double" must still be disallowed. The new intrinsics are
accessed via vec_max and vec_min with changes in
lib/Headers/altivec.h. Note that for v4f32, we already access
corresponding VMX builtins, but with VSX enabled we should use the
forms that allow all 64 vector registers.
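A hedged sketch of the new intrinsics (compile with VSX enabled, e.g. -mvsx, on a
suitable PowerPC target):
#include <altivec.h>

vector double vmax(vector double a, vector double b) {
  return vec_max(a, b);   /* with VSX enabled this selects the double-precision form */
}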
The new built-ins are defined in include/clang/Basic/BuiltinsPPC.def.
I've added a new test in test/CodeGen/builtins-ppc-vsx.c that is
similar to, but much smaller than, builtins-ppc-altivec.c. This
allows us to test VSX IR generation without duplicating CHECK lines
for the existing bazillion Altivec tests.
Since vector double is now legal when VSX is available, I've modified
the error message, and changed where we test for it and for vector
long double, since the target machine isn't visible in the old place.
This serendipitously removed a not-pertinent warning about 'long'
being deprecated when used with 'vector', when "vector long double" is
encountered and we just want to issue an error. The existing tests
test/Parser/altivec.c and test/Parser/cxx-altivec.cpp have been
updated accordingly, and I've added test/Parser/vsx.c to verify that
"vector double" is now legitimate with VSX enabled.
There is a companion patch for LLVM.
llvm-svn: 220989
Wire it through everywhere we have support for fastcall, essentially.
This allows us to parse the MSVC "14" CTP headers, but we will
miscompile them because LLVM doesn't support __vectorcall yet.
Reviewed By: Aaron Ballman
Differential Revision: http://reviews.llvm.org/D5808
llvm-svn: 220573
This is long-since overdue, and matches GCC 5.0. This should also be
backwards-compatible, because we already supported all of C11 as an extension
in C99 mode.
llvm-svn: 220244
Thumb1 has legitimate reasons for preferring 32-bit alignment of types
i1/i8/i16, since the 16-bit encoding of "add rD, sp, #imm" requires #imm to be
a multiple of 4. However, this is a trade-off between code size and RAM usage;
the DataLayout string is not the best place to represent it even if desired.
So this patch removes the extra Thumb requirements, hopefully making ARM and
Thumb completely compatible in this respect.
llvm-svn: 219735
Before, ARM and Thumb mode code had different preferred alignments, which could
lead to some rather unexpected results. There's justification for reducing it
from the default 64 bits (wasted space), but I don't think there is for going
below 32-bits.
There's no actual ABI change here, just to reassure people.
llvm-svn: 219720
The current VSX feature for PowerPC specifies availability of the VSX
instructions added with the 2.06 architecture version. With 2.07, the
architecture adds new instructions to both the Category:Vector and
Category:VSX instruction sets. Additionally, unaligned vector storage
operations have improved performance.
This patch adds a feature to provide access to the new instructions
and performance capabilities of Power8. For compatibility with GCC,
the feature is controlled via a new -mpower8-vector switch, and the
feature causes the __POWER8_VECTOR__ builtin define to be generated by
the preprocessor.
There is a companion patch for llvm being committed at the same time.
llvm-svn: 219502
The Cortex-M7 has 3 options for its FPU: none, FPv5-SP-D16 and
FPv5-DP-D16. FPv5 has the same instructions as FP-ARMv8, so it can be
modeled using the same target feature, and all double-precision
operations are already disabled by the fp-only-sp target feature.
llvm-svn: 218748