Summary:
MSDN says that fastcall, stdcall, thiscall, and vectorcall are all
accepted but ignored on ARM and X64.
https://msdn.microsoft.com/en-us/library/984x0h58.aspx
MSDN also says that cdecl is accepted and typically ignored.
This patch brings ARM in line with how we ignore them for X64.
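As a hedged illustration (assuming the Microsoft keyword spellings are
available, e.g. via -fms-extensions), a declaration like the sketch below now
compiles the same way on ARM as it does on X64: the convention keyword is
parsed and then ignored.

  /* Sketch only: on ARM and X64 the __fastcall below is accepted but has no
     effect on the calling convention that is actually used. */
  int __fastcall add(int a, int b) { return a + b; }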
Reviewers: rnk
Subscribers: compnerd, cfe-commits
Differential Revision: http://reviews.llvm.org/D12034
llvm-svn: 245076
... and add aarch32 to specifically refer to the 32-bit ones.
Previously, 'arm' meant only 32-bit architectures and there was no way
for a module to build with both 32 and 64 bit ARM architectures.
Now a module that is intended to work on both architectures can specify
  requires arm
whereas a module only for 32-bit platforms can say
  requires aarch32
and, just like before, a 64-bit-only module can say
  requires aarch64
llvm-svn: 244306
so that we can populate it on a per-target basis with required features.
Future commits will start using this information for warnings.
llvm-svn: 244286
The z13 vector facility has an associated language extension,
closely modeled on AltiVec/VSX. The main differences are:
- vector long, vector float and vector pixel are not supported
- vector long long and vector double are supported (like VSX)
- comparison operators return a vector rather than a scalar integer
- shift operators behave like the OpenCL shift operators
- vector bool is only supported as argument to certain operators;
some operators allow mixing a bool with a non-bool vector
This patch adds clang support for the extension. It is closely modelled
on the AltiVec support. Similarly to the -faltivec option, there's a
new -fzvector option to enable the extensions (as well as an -mzvector
alias for compatibility with GCC). There's also a separate LangOpt.
The extension as implemented here is intended to be compatible with
the -mzvector extension recently implemented by GCC.
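A minimal sketch of the semantics described above, assuming a z13 target
compiled with -fzvector (illustrative only, not part of the patch):

  /* Sketch only: comparison and shift semantics of the vector extension. */
  void example(void) {
    vector signed int a = {1, 2, 3, 4};
    vector signed int b = {1, 0, 3, 0};

    /* Comparisons return a vector of per-element results rather than a
       scalar integer. */
    vector bool int eq = (a == b);

    /* Shifts behave like the OpenCL shift operators: each element of a is
       shifted by the corresponding element of b modulo the element width. */
    vector signed int sh = a << b;

    (void)eq;
    (void)sh;
  }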
Based on a patch by Richard Sandiford.
Differential Revision: http://reviews.llvm.org/D11001
llvm-svn: 243642
We ended up with the wrong predefine after the recent TargetParser shuffle, and
I accidentally solidified it with a test. This should fix it.
llvm-svn: 242841
The "armv7-windows", "i686-windows", and "x86_64-windows" targets should be
equivalent to the MSVC environment. This was previously discussed when the
triples for Windows were canonicalised. I'm not sure how this was overlooked.
This fixes the erroneous emission of non-COFF formats on Windows.
Thanks to ki9a for reporting this issue over IRC!
llvm-svn: 242574
for extracting target-specific information.
- Patch for commit r241343: the case 'armv7l' was unhandled in
ARMTargetInfo::getCPUAttr(), and thus it returned invalid
characters for the macro definition.
Change-Id: I1a0972e5ff5529cd17376c6562047bab8b4da32c
Phabricator: http://reviews.llvm.org/D10839
llvm-svn: 242514
MSVC 4.2 didn't have bool as a builtin type but MSVC 5.0 does. When
they added it, they added a macro (__BOOL_DEFINED) which allows build
scripts and the like to know if they should provide their own bool.
Clang always supports bool as a builtin type in C++ mode.
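A hedged sketch of the kind of compatibility code that keys off this macro
(hypothetical header, not from this patch):

  /* Hypothetical shim for headers that must also build with pre-5.0 MSVC:
     if the compiler does not announce a builtin bool, provide one. */
  #if defined(__cplusplus) && !defined(__BOOL_DEFINED)
  typedef int bool;
  #define true 1
  #define false 0
  #endif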
llvm-svn: 242307
can be different from the normal variable maximum.
Add an error diagnostic for when TLS variables exceed maximum TLS alignment.
Currently only PS4 sets an explicit maximum TLS alignment.
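A hedged sketch of a declaration the new diagnostic would reject on a target
whose maximum TLS alignment is lower than the requested value:

  /* Sketch only: requesting 256-byte alignment for a TLS variable is now an
     error if it exceeds the target's maximum TLS alignment. */
  __thread int tls_counter __attribute__((aligned(256)));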
Patch by Charles Li!
llvm-svn: 242198
This patch corresponds to review:
http://reviews.llvm.org/D10972
Fix for the handling of dependent features that are enabled by default
on some CPUs (such as -mvsx, -mpower8-vector).
Also provides a number of new interfaces or fixes existing ones in
altivec.h.
Changed signatures to conform to ABI:
vector short vec_perm(vector signed short, vector signed short, vector unsigned char)
vector int vec_perm(vector signed int, vector signed int, vector unsigned char)
vector long long vec_perm(vector signed long long, vector signed long long, vector unsigned char)
vector signed char vec_sld(vector signed char, vector signed char, const int)
vector unsigned char vec_sld(vector unsigned char, vector unsigned char, const int)
vector bool char vec_sld(vector bool char, vector bool char, const int)
vector unsigned short vec_sld(vector unsigned short, vector unsigned short, const int)
vector signed short vec_sld(vector signed short, vector signed short, const int)
vector signed int vec_sld(vector signed int, vector signed int, const int)
vector unsigned int vec_sld(vector unsigned int, vector unsigned int, const int)
vector float vec_sld(vector float, vector float, const int)
vector signed char vec_splat(vector signed char, const int)
vector unsigned char vec_splat(vector unsigned char, const int)
vector bool char vec_splat(vector bool char, const int)
vector signed short vec_splat(vector signed short, const int)
vector unsigned short vec_splat(vector unsigned short, const int)
vector bool short vec_splat(vector bool short, const int)
vector pixel vec_splat(vector pixel, const int)
vector signed int vec_splat(vector signed int, const int)
vector unsigned int vec_splat(vector unsigned int, const int)
vector bool int vec_splat(vector bool int, const int)
vector float vec_splat(vector float, const int)
Added a VSX path to:
vector float vec_round(vector float)
Added interfaces:
vector signed char vec_eqv(vector signed char, vector signed char)
vector signed char vec_eqv(vector bool char, vector signed char)
vector signed char vec_eqv(vector signed char, vector bool char)
vector unsigned char vec_eqv(vector unsigned char, vector unsigned char)
vector unsigned char vec_eqv(vector bool char, vector unsigned char)
vector unsigned char vec_eqv(vector unsigned char, vector bool char)
vector signed short vec_eqv(vector signed short, vector signed short)
vector signed short vec_eqv(vector bool short, vector signed short)
vector signed short vec_eqv(vector signed short, vector bool short)
vector unsigned short vec_eqv(vector unsigned short, vector unsigned short)
vector unsigned short vec_eqv(vector bool short, vector unsigned short)
vector unsigned short vec_eqv(vector unsigned short, vector bool short)
vector signed int vec_eqv(vector signed int, vector signed int)
vector signed int vec_eqv(vector bool int, vector signed int)
vector signed int vec_eqv(vector signed int, vector bool int)
vector unsigned int vec_eqv(vector unsigned int, vector unsigned int)
vector unsigned int vec_eqv(vector bool int, vector unsigned int)
vector unsigned int vec_eqv(vector unsigned int, vector bool int)
vector signed long long vec_eqv(vector signed long long, vector signed long long)
vector signed long long vec_eqv(vector bool long long, vector signed long long)
vector signed long long vec_eqv(vector signed long long, vector bool long long)
vector unsigned long long vec_eqv(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_eqv(vector bool long long, vector unsigned long long)
vector unsigned long long vec_eqv(vector unsigned long long, vector bool long long)
vector float vec_eqv(vector float, vector float)
vector float vec_eqv(vector bool int, vector float)
vector float vec_eqv(vector float, vector bool int)
vector double vec_eqv(vector double, vector double)
vector double vec_eqv(vector bool long long, vector double)
vector double vec_eqv(vector double, vector bool long long)
vector bool long long vec_perm(vector bool long long, vector bool long long, vector unsigned char)
vector double vec_round(vector double)
vector double vec_splat(vector double, const int)
vector bool long long vec_splat(vector bool long long, const int)
vector signed long long vec_splat(vector signed long long, const int)
vector unsigned long long vec_splat(vector unsigned long long, const int)
vector bool int vec_sld(vector bool int, vector bool int, const int)
vector bool short vec_sld(vector bool short, vector bool short, const int)
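A hedged usage sketch of a few of the interfaces listed above (assuming a
POWER8 target with the VSX/power8-vector features enabled):

  #include <altivec.h>

  /* Sketch only: exercises vec_eqv, vec_sld, vec_splat and vec_perm. */
  vector signed int demo(vector signed int a, vector signed int b) {
    vector signed int e = vec_eqv(a, b);    /* bitwise equivalence, ~(a ^ b) */
    vector signed int s = vec_sld(a, b, 4); /* shift left double by 4 bytes  */
    vector signed int t = vec_splat(e, 0);  /* replicate element 0           */
    vector unsigned char pick_first =
        (vector unsigned char){0, 1, 2,  3,  4,  5,  6,  7,
                               8, 9, 10, 11, 12, 13, 14, 15};
    return vec_perm(s, t, pick_first);      /* byte-wise permute of s and t  */
  }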
llvm-svn: 241904
For Mips direct-to-nacl, the goal is to be close to le32 front-end and
use Mips32EL backend. This patch defines new NaClMips32ELTargetInfo and
modifies it slightly to be close to le32. It also adds the necessary parts,
in line with ARM and X86.
Differential Revision: http://reviews.llvm.org/D10739
llvm-svn: 241678
for extracting target-specific information.
- Patch for commit 241267: ShouldUseInlineAtomic was set incorrectly when subArch was
not specified, causing regressions.
Change-Id: Iabb35d59722f4972f1a3ab4365880add5bbcfdcc
llvm-svn: 241343
This reinstates part of the hack removed in r233223, by special
casing sse4 as part of the feature additions. The notable change
here is that we consider it only as part of setting the SSE level,
and not as part of the actual target feature set, which handles
setting the rest of the masks.
llvm-svn: 241130
This matches the implementation of the gcc support for the same
feature, including checking the values set up by libgcc at runtime.
The structure looks like this:
unsigned int __cpu_vendor;
unsigned int __cpu_type;
unsigned int __cpu_subtype;
unsigned int __cpu_features[1];
with a set of enums to match various fields that are filled out after
parsing the output of the cpuid instruction.
This also adds error checking for valid input (and CPU).
compiler-rt support for this and the other builtins in this family
(__builtin_cpu_init and __builtin_cpu_is) is forthcoming.
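A minimal usage sketch of __builtin_cpu_supports (the helper functions are
hypothetical; the feature-name strings follow the GCC convention):

  void process_avx2(float *data, int n);   /* hypothetical AVX2 code path    */
  void process_scalar(float *data, int n); /* hypothetical portable fallback */

  void process(float *data, int n) {
    /* Dispatch at runtime based on the features libgcc detected via cpuid. */
    if (__builtin_cpu_supports("avx2"))
      process_avx2(data, n);
    else
      process_scalar(data, n);
  }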
llvm-svn: 240994
when iterating through the Features vector if we don't
keep track of what's already been set. This could lead to
the macro __ARM_FP getting the wrong value. This patch
fixes this issue by keeping track of the bits that have
already been set in the loop.
Differential Revision: http://reviews.llvm.org/D10395
llvm-svn: 240607
As specified in the SysV AVX512 ABI drafts. It follows the same scheme
as AVX2:
Arguments of type __m512 are split into eight eightbyte chunks.
The least significant one belongs to class SSE and all the others
to class SSEUP.
This also means we change the OpenMP SIMD default alignment on AVX512.
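As a hedged illustration (assuming an AVX-512 capable target, e.g. -mavx512f):
a by-value __m512 such as the parameters below is classified as one SSE
eightbyte plus seven SSEUP eightbytes and is therefore passed in a single ZMM
register.

  #include <immintrin.h>

  /* Sketch only: the __m512 arguments and the return value travel in ZMM
     registers under the SysV AVX-512 ABI described above. */
  __m512 add_ps(__m512 a, __m512 b) {
    return _mm512_add_ps(a, b);
  }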
Based on r240337.
Differential Revision: http://reviews.llvm.org/D9894
llvm-svn: 240338
In r239421, the mangling of long double on PowerPC Linux targets
was changed to use "g" instead of "e". This same change also needs
to be done for SystemZ (all targets, since we support only Linux
on SystemZ anyway).
This is because an old ABI variant set "long double" to a 64-bit
type equivalent to "double", and the "e" mangling code is still
used to refer to that old ABI for compatibility reasons.
llvm-svn: 239822
Some people want to experiment with building i686 CloudABI binaries. I
am not entirely sure this is a good idea, as I'd rather see Intel x32
support appear.
As it only requires a two-line change, let's at least provide compiler
support to ease experimenting.
llvm-svn: 239689
GCC mangles long double like __float128 in order to support
compatibility with ABI variants which had a different interpretation of
long double.
This fixes PR23791.
llvm-svn: 239421
They should be 'int' instead of 'long int' everywhere else except NetBSD,
from what I gather from GCC's spec files. So, optimistically
changing it for everyone else, too.
llvm-svn: 239046
Cygwin (and MinGW) targets define __declspec to __attribute__ unless
-fms-extensions is specified. It turns out that cygwin headers rely on
the existence of this macro.
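A hedged sketch of what this means in practice (the exact predefinition is an
implementation detail):

  /* On Cygwin/MinGW without -fms-extensions, __declspec(x) is predefined to
     expand to __attribute__((x)), so declarations like this keep working: */
  __declspec(dllexport) int exported_symbol;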
llvm-svn: 238394
Avoiding an ugly combination of string parsing in the front-end. We still
need to move away from CPU parsing entirely, but that's for a different
commit.
llvm-svn: 238318
This patch adds support for the following new instructions in the
Power ISA 2.07:
vpksdss
vpksdus
vpkudus
vpkudum
vupkhsw
vupklsw
These instructions are available through the vec_packs, vec_packsu,
vec_unpackh, and vec_unpackl built-in interfaces. These are
lane-sensitive instructions, so the built-ins have different
implementations for big- and little-endian, and the instructions must
be marked as killing the vector swap optimization for now.
The first three instructions perform saturating pack operations. The
fourth performs a modulo pack operation, which means it can be
represented with a vector shuffle, and conversely the appropriate
vector shuffles may cause this instruction to be generated. The other
instructions are only generated via built-in support for now.
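A hedged usage sketch of the built-in interfaces (assuming a POWER8 target
with the power8-vector feature enabled):

  #include <altivec.h>

  /* Sketch only: saturating pack of two doubleword vectors into one word
     vector; maps to vpksdss. */
  vector signed int pack_words(vector signed long long a,
                               vector signed long long b) {
    return vec_packs(a, b);
  }

  /* Sketch only: unpack the high word elements to doublewords; maps to
     vupkhsw (endian-aware, as noted above). */
  vector signed long long unpack_high(vector signed int v) {
    return vec_unpackh(v);
  }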
I noticed during patch preparation that the macro __VSX__ was not
previously predefined when the power8-vector or direct-move features
were requested. This is an error, and I've corrected that here as
well.
Appropriate tests have been added.
There is a companion patch to llvm for the rest of this support.
llvm-svn: 237500
Follow-up to commit for revision 236848.
Just a test case for the macro definition under the right CPU/Arch.
One combination was actually missed in the initial fix:
- powerpc64-unknown-unknown -mcpu=pwr8 (rather than -mcpu=power8).
llvm-svn: 237386
This patch adds support for the z13 architecture type. For compatibility
with GCC, a pair of options -mvx / -mno-vx can be used to selectively
enable/disable use of the vector facility.
When the vector facility is present, we default to the new vector ABI.
This is characterized by two major differences:
- Vector types are passed/returned in vector registers
(except for unnamed arguments of a variable-argument list function).
- Vector types are at most 8-byte aligned.
The reason for the choice of 8-byte vector alignment is that the hardware
is able to efficiently load vectors at 8-byte alignment, and the ABI only
guarantees 8-byte alignment of the stack pointer, so requiring any higher
alignment for vectors would require dynamic stack re-alignment code.
However, for compatibility with old code that may use vector types, when
*not* using the vector facility, the old alignment rules (vector types
are naturally aligned) remain in use.
These alignment rules are not only implemented at the C language level,
but also at the LLVM IR level. This is done by selecting a different
DataLayout string depending on whether the vector ABI is in effect or not.
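A hedged sketch of the alignment rule, using a generic 16-byte vector type
(expected to print 8 when the vector ABI is in effect and 16 under the old
rules):

  #include <stdio.h>

  typedef __attribute__((vector_size(16))) signed int v4si;

  int main(void) {
    /* Per the rules above: at most 8-byte alignment under the vector ABI,
       natural (16-byte) alignment otherwise. */
    printf("alignof(v4si) = %zu\n", _Alignof(v4si));
    return 0;
  }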
Based on a patch by Richard Sandiford.
llvm-svn: 236531
Cyclone actually supports all the goodies you'd expect to come with an AArch64
CPU, so it doesn't need its own clause. Also we should probably be testing
these clauses.
llvm-svn: 236349
by erasing the soft-float target feature if the rest of the front
end added it because of defaults or the soft float option.
Add some testing for some of the targets that implement this hack.
llvm-svn: 236179
This issue was fixed elsewhere in r235396 in a more general way, hence these
changes no longer do anything. Keep the testcase however, to ensure that we
don't regress this for ARM.
llvm-svn: 236104
When creating a global variable whose type is a struct with bitfields, we must
forcibly set the alignment of the global from the RecordDecl. We must do this so
that the proper bitfield alignment makes its way down to LLVM, since clang will
mangle the bitfields into one large type.
llvm-svn: 235976
The GCC construct __attribute__((aligned)) is defined to set alignment
to "the default alignment for the target architecture" according to
the GCC documentation:
The default alignment is sufficient for all scalar types, but may not be
enough for all vector types on a target that supports vector operations.
The default alignment is fixed for a particular target ABI.
clang currently hard-codes an alignment of 16 bytes for that construct,
which is correct on some platforms (including X86), but wrong on others
(including SystemZ). Since this value is ABI-relevant, it is important
to get it correct for compatibility purposes.
This patch adds a new TargetInfo member "DefaultAlignForAttributeAligned"
that targets can set to the appropriate default __attribute__((aligned))
value.
Note that I'm deliberately *not* using the existing "SuitableAlign"
value, which is used to set the pre-defined macro __BIGGEST_ALIGNMENT__,
since those two values may not be the same on all platforms. In fact,
on X86, __attribute__((aligned)) always uses 16-byte alignment, while
__BIGGEST_ALIGNMENT__ may be larger if AVX-2 or AVX-512 are supported.
(This is actually not yet correctly implemented in clang either.)
The patch provides a value for DefaultAlignForAttributeAligned only for
SystemZ, and leaves the default for all other targets at 16, which means
no visible change in behavior on all other targets. (The value is still
wrong for some other targets, but I'd prefer to leave it to the target
maintainers for those platforms to fix.)
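A hedged sketch of the construct in question; with this patch the alignment
reported below follows the target's DefaultAlignForAttributeAligned (16 bytes
on X86, a smaller ABI-mandated value on SystemZ):

  /* Sketch only: 'aligned' with no argument requests the target's default
     alignment, which is now configurable per target. */
  struct S {
    char c;
  } __attribute__((aligned));

  /* _Alignof(struct S) reflects DefaultAlignForAttributeAligned, in bytes. */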
llvm-svn: 235397
This patch corresponds to review:
http://reviews.llvm.org/D8930
This just adds a front end option to let the back end know the target has PPC
direct move instructions.
llvm-svn: 234683
This patch corresponds to review:
http://reviews.llvm.org/D8398
It adds some builtin functions to access the extended divide and bit permute instructions.
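A hedged usage sketch (builtin names taken from the corresponding GCC/XL
interfaces; assuming a target where these instructions exist):

  /* Sketch only: extended divide and bit-permute builtins. */
  long long extended_divide(long long a, long long b) {
    return __builtin_divde(a, b);          /* divide doubleword extended */
  }

  long long bit_permute(long long mask, long long source) {
    return __builtin_bpermd(mask, source); /* bit permute doubleword     */
  }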
llvm-svn: 234547
Adds ARM Cortex-R4 and R4F support and tests in Clang. Though Cortex-R4
support was present, the support for hwdiv in thumb-mode was not defined
or tested properly. This has also been added.
llvm-svn: 234488
Do the same thing as win64. If we're not using COFF, use the ELF
manglings. Maybe if we are targeting *-windows-msvc-macho, we should
use darwin manglings, but I don't need to stir that pot today.
llvm-svn: 233819
This should fix build-bot failures after r233804.
The patch also adds a "systemz" feature, and renames the
"transactional-execution" feature to "htm", since it turns
out "-" is not a legal character in module feature names.
llvm-svn: 233807
The zEC12 provides the transactional-execution facility. This is exposed
to users via a set of builtin routines on other compilers. This patch
adds clang support to enable those builtins. In particular, the patch:
- enables the transactional-execution feature by default on zEC12
- allows overriding the presence of that feature via the -mhtm/-mno-htm options
- adds a predefined macro __HTM__ if the feature is enabled
- adds support for the transactional-execution GCC builtins
- adds Sema checking to verify the __builtin_tabort abort code
- adds the s390intrin.h header file (for GCC compatibility)
- adds s390 sections to the htmintrin.h and htmxlintrin.h header files
Since this is the first use of target-specific intrinsics on the platform,
the patch creates the include/clang/Basic/BuiltinsSystemZ.def file and
hooks it up in TargetBuiltins.h and lib/Basic/Targets.cpp.
An associated LLVM patch adds the required LLVM IR intrinsics.
For reference, the transactional-execution instructions are documented
in the z/Architecture Principles of Operation for the zEC12:
http://publibfp.boulder.ibm.com/cgi-bin/bookmgr/download/DZ9ZR009.pdf
The associated builtins are documented in the GCC manual:
http://gcc.gnu.org/onlinedocs/gcc/S_002f390-System-z-Built-in-Functions.html
The htmxlintrin.h intrinsics provided for compatibility with the IBM XL
compiler are documented in the "z/OS XL C/C++ Programming Guide".
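A hedged usage sketch of the builtins (assuming -march=zEC12 -mhtm; names
follow the GCC interfaces documented above):

  #include <htmintrin.h>

  /* Sketch only: run a small critical section as a transaction and fall back
     if it cannot start. */
  long increment(long *counter) {
    if (__builtin_tbegin((void *)0) == _HTM_TBEGIN_STARTED) {
      long value = ++*counter;  /* transactional body */
      __builtin_tend();         /* commit */
      return value;
    }
    return -1;                  /* transaction aborted or could not start */
  }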
llvm-svn: 233804
Add Tool and ToolChain support for clang to target the NaCl OS using the NaCl
SDK for x86-32, x86-64 and ARM.
Includes nacltools::Assemble and Link which are derived from gnutools. They
are similar to Linux but different enough that they warrant their own class.
Also includes a NaCl_TC in ToolChains derived from Generic_ELF with library
and include paths suitable for an SDK and independent of the system tools.
Differential Revision: http://reviews.llvm.org/D8590
llvm-svn: 233594
Like on other 64-bit platforms, Int64Type should be SignedLong
on SystemZ, not SignedLongLong as per default. This could cause
ABI incompatibilities in certain cases (e.g. name mangling).
llvm-svn: 233544
they enable/disable.
This fixes two things:
a) sse4 isn't actually a target feature, don't treat it as one.
b) we weren't correctly disabling sse4.1 when we'd pass -mno-sse4
after enabling it, thus passing preprocessor directives and
(soon) passing the function attribute as well when we shouldn't.
llvm-svn: 233223
This patch adds Hardware Transaction Memory (HTM) support supported by ISA 2.07
(POWER8). The intrinsic support is based on the GCC one [1], with both 'PowerPC HTM
Low Level Built-in Functions' and 'PowerPC HTM High Level Inline Functions'
implemented.
Along with the builtins, a new driver switch is added to enable/disable HTM
instruction support (-mhtm), as well as a header with common definitions (mostly
to parse the TFHAR register value). The HTM switch also defines the __HTM__
preprocessor macro.
Using HTM requires a recent kernel with PPC HTM enabled. Tested on
powerpc64 and powerpc64le.
This is sent along with an LLVM patch that enables the builtins and option switch.
[1]
https://gcc.gnu.org/onlinedocs/gcc/PowerPC-Hardware-Transactional-Memory-Built-in-Functions.html
Phabricator Review: http://reviews.llvm.org/D8248
llvm-svn: 233205
On android x86_32 the long double is only 64 bits (compared to 80 bits
on linux x86_32) and on android x86_64 the long double is IEEEquad
(compared to x87DoubleExtended on linux x86_64). This CL creates new
TargetInfo classes for these targets to represent these differences.
Differential revision: http://reviews.llvm.org/D8357
llvm-svn: 233177
ARMv6K is another layer between ARMV6 and ARMV6T2. This is the Clang
side of the changes.
ARMV6 family LLVM implementation.
+-------------------------------------+
|                ARMV6                |
+----------------+--------------------+
| ARMV6M (thumb) | ARMV6K (arm,thumb) |  <- Both the ARMV6K and ARMV6M
+----------------+--------------------+     processors have support for hint
|     ARMV6T2 (arm,thumb,thumb2)      |     instructions (SEV/WFE/WFI/NOP/
+-------------------------------------+     YIELD). They can be either real
|       ARMV7 (arm,thumb,thumb2)      |     or default to NOP. The two
+-------------------------------------+     processors also use different
                                            encodings for them.
Patch by Vinicius Tinti.
llvm-svn: 232469