Most 64-bit targets define int64_t as long int, and AArch64 should
use the same definition to follow the LP64 model. The GNU toolchain
defines int64_t as long int on 64-bit targets, so for consistency with
GNU it's better to change int64_t from 'long long int' to 'long int';
otherwise clang will produce a different name-mangling suffix than g++.
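As a minimal illustration (the function name here is hypothetical, not from
the patch), the Itanium C++ ABI mangles the two choices differently:

#include <cstdint>
// With int64_t defined as 'long int', g++ and clang++ both mangle this
// as _Z4takel; with 'long long int' clang would emit _Z4takex instead.
void take(int64_t v);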
llvm-svn: 202004
This fixes one immediate bug where an expression with side-effects
could be emitted twice during a NEON call.
It also prepares the way for folding CodeGen for many of the SISD
intrinsics into a table, reducing code size and hopefully increasing
performance eventually ("binary search + few switch cases" should be
better than "lots of switch cases").
llvm-svn: 201667
We used to have special handling for the isCrypto and isA64 bits in
NeonEmitter.cpp (it knew the former was predicated on __ARM_FEATURE_CRYPTO
and the latter on __aarch64__) and went through various contortions to make
sure the correct intrinsics were emitted under the correct guard.
This is ugly and has obvious scalability problems (e.g. the vcvtX intrinsics
are needed, which are ARMv8-only but available on both AArch32 and AArch64:
yet another category). This patch moves the #if predicate into the
arm_neon.td file directly and makes NeonEmitter.cpp agnostic about what goes
in there.
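To sketch the effect (the guard macros are real, but the exact generated
layout shown here is illustrative), the emitted arm_neon.h declarations end
up wrapped directly in the predicate taken from their .td entry:

// Crypto intrinsics are guarded by the feature macro...
#if defined(__ARM_FEATURE_CRYPTO)
uint8x16_t vaeseq_u8(uint8x16_t __data, uint8x16_t __key);
#endif
// ...while AArch64-only intrinsics are guarded by the arch macro.
#if defined(__aarch64__)
float64x2_t vaddq_f64(float64x2_t __a, float64x2_t __b);
#endif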
It also deduplicates arm_neon.td so that each desired intrinsic is mentioned in
just one place (necessary because of the new mechanism for creating
arm_neon.h).
rdar://problem/16035743
llvm-svn: 201660
There are two kinds of automatically generated tests for NEON intrinsics, both
of which can be merged without adversely affecting users.
1. We check that a valid kind of __builtin_neon_XYZ overload is requested (e.g.
that we're not asking for a float32x4_t version when it only accepts integers).
Since the __builtin_neon_XYZ intrinsics should only be used in arm_neon.h,
relaxing this test and permitting AArch64 types for AArch32 should not cause a
problem. The extra arm_neon.h definitions should be #ifdefed out anyway.
2. We check that intrinsics which take immediates are actually given
compile-time constants within range. Since all NEON intrinsics should be
backwards compatible, these tests should be identical on AArch64 and AArch32
anyway.
This patch, therefore, merges the separate AArch64 and 32-bit checks.
rdar://problem/16035743
llvm-svn: 201659
Previously, range checking on the __builtin_neon_XYZ_v Clang intrinsics didn't
take account of the type actually passed to the call, which meant a request
like "vext_s16(a, b, 7)" was allowed through (TableGen was conservative and
allowed 0-7 for all types). This caused an assert in the backend because the
lane doesn't make sense.
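For example (an illustrative test case; the exact diagnostic wording is
assumed):

#include <arm_neon.h>
int16x4_t bad_ext(int16x4_t a, int16x4_t b) {
  // int16x4_t only has lanes 0-3, so this is now rejected at compile
  // time instead of asserting later in the backend.
  return vext_s16(a, b, 7);
}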
llvm-svn: 201232
This is a duplicate implementation.
E.g. this patch defines:
float64_t vabd_f64(float64_t a, float64_t b)
But there is already a similar intrinsic "vabdd_f64" with the same types.
Also, this intrinsic will conflict with the following vector-type intrinsic
(which I have implemented and will commit to trunk):
float64x1_t vabd_f64(float64x1_t a, float64x1_t b)
Two functions shouldn't have the same name in arm_neon.h.
According to the ARM ACLE document, no such vabd_f64 taking float64_t exists.
So I am reverting this commit.
llvm-svn: 196205
There seem to be quite a few references to the old macro __ARM_NEON__ on the
internet, so I don't think it's a good idea to remove it entirely (at least
not yet), but the canonical name does not have the trailing underscores, so we
should use that ourselves.
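A minimal usage sketch (clang defines both macros, but new code should test
the canonical spelling):

#if defined(__ARM_NEON)   // canonical ACLE spelling, no trailing underscores
#include <arm_neon.h>
#endif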
llvm-svn: 195353
Patch by Ana Pazos
- Completed implementation of instruction formats:
AdvSIMD three same
AdvSIMD modified immediate
AdvSIMD scalar pairwise
- Completed implementation of instruction classes
(some of the instructions in these classes
belong to yet unfinished instruction formats):
Vector Arithmetic
Vector Immediate
Vector Pairwise Arithmetic
- Initial implementation of instruction formats:
AdvSIMD scalar two-reg misc
AdvSIMD scalar three same
- Initial implementation of instruction class:
Scalar Arithmetic
- Initial clang changes to support ARMv8 intrinsics.
Note: no clang changes for scalar intrinsics function name mangling yet.
- Comprehensive test cases for the added instructions,
verifying automatic codegen, encoding, decoding, diagnostics, and intrinsics.
llvm-svn: 187568
This will prevent the tests from running during a normal make check. You will
need to pass --param run_long_tests=true to LIT in order to run these.
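For example (the test path is illustrative):

llvm-lit --param run_long_tests=true test/CodeGen/AArch64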
llvm-svn: 184784
These intrinsics use the __builtin_shufflevector() function to extract the
low and high half, respectively, of a 128-bit NEON vector. Currently,
they're defined to use bitcasts to simplify the emitter, so we get code
like:
uint16x4_t vget_low_u16(uint16x8_t __a) {
  return (uint16x4_t) __builtin_shufflevector((int64x2_t) __a,
                                              (int64x2_t) __a,
                                              0);
}
While this works, it results in those bitcasts going all the way through
to the IR, resulting in code like:
%1 = bitcast <8 x i16> %in to <2 x i64>
%2 = shufflevector <2 x i64> %1, <2 x i64> undef, <1 x i32> zeroinitializer
%3 = bitcast <1 x i64> %2 to <4 x i16>
We can instead easily perform the operation directly on the input vector
like:
uint16x4_t vget_low_u16(uint16x8_t __a) {
  return __builtin_shufflevector(__a, __a, 0, 1, 2, 3);
}
Not only is that much easier to read on its own, it also results in
cleaner IR like:
%1 = shufflevector <8 x i16> %in, <8 x i16> undef,
<4 x i32> <i32 0, i32 1, i32 2, i32 3>
This is both easier to read and easier for the back end to reason about,
since the operation is no longer obfuscated by the bitcasts.
rdar://13894163
llvm-svn: 181865