Commit Graph

960 Commits

Reid Kleckner 8a8c129a4b Do the same IRgen for __builtin_pow* as for pow*
There's no reason for these to be different.
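For illustration, a minimal C sketch (function names are made up) showing the two spellings that now receive identical IRgen:

  #include <math.h>

  /* Both forms now get the same IRgen (an @llvm.pow call or a pow() libcall,
     depending on the math-errno settings). */
  double via_libm(double x, double y)    { return pow(x, y); }
  double via_builtin(double x, double y) { return __builtin_pow(x, y); }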

llvm-svn: 228240
2015-02-05 00:18:01 +00:00
Reid Kleckner aca01db706 Implement IRGen for SEH __finally and AbnormalTermination
Previously we would simply double-emit the body of the __finally block,
but that doesn't work when it contains any kind of Decl, which we can't
double emit.

This fixes that by emitting the block once and branching into a shared
code region and then branching back out.
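A hedged C sketch (MSVC mode) of the case that previously broke; record_cleanup is a hypothetical helper:

  void record_cleanup(int);

  void touch(int *p) {
    __try {
      *p = 1;
    } __finally {
      int done = 1;          /* a Decl inside the __finally body */
      record_cleanup(done);
    }
  }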

llvm-svn: 228222
2015-02-04 22:37:07 +00:00
David Majnemer 310e3a8f60 MS ABI: Implement proper support for setjmp
On targets which use the MSVCRT, setjmp is a macro which expands to
_setjmp or _setjmpex.

_setjmp and _setjmpex have a secret, hidden argument which is not listed
in the function prototype on X64 and WoA.  This hidden argument always
seems to be the frame pointer.

_setjmpex isn't used on X86; there, _setjmp is magically replaced with a call
to _setjmp3.  Its second argument is zero for 'normal' setjmp/longjmp
pairs; otherwise it is a count of additional variadic arguments.  This
is used when setjmp appears inside of a try or __try.

It is not safe to use a pointer to setjmp because _setjmp, _setjmpex and
_setjmp3 are not compatible with setjmp.
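A minimal usage sketch; on MSVCRT targets this call is now lowered with the hidden extra argument described above:

  #include <setjmp.h>

  int probe(jmp_buf env) {
    if (setjmp(env) == 0)
      return 0;              /* direct return */
    return 1;                /* returned via longjmp */
  }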

llvm-svn: 227426
2015-01-29 09:29:21 +00:00
Pete Cooper f051cbf631 Don't generate llvm.expect intrinsics with -O0.
The backend won't run LowerExpect on -O0.  In a debug LTO build, this results in llvm.expect intrinsics remaining in the LTO IR, and LTO doesn't know how to optimize them.
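A small example of the affected construct; at -O0 the hint is now dropped instead of emitting @llvm.expect:

  int hot_path(int x) {
    if (__builtin_expect(x != 0, 1))   /* no @llvm.expect at -O0 */
      return x * 2;
    return -1;
  }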

Thanks to Chandler for the suggestion and review.

Differential revision: http://reviews.llvm.org/D7183

llvm-svn: 227135
2015-01-26 20:51:58 +00:00
Reid Kleckner 1d59f99f5c Initial support for Win64 SEH IR emission
The lowering looks a lot like normal EH lowering, with the exception
that the exceptions are caught by executing filter expression code
instead of matching typeinfo globals. The filter expressions are
outlined into functions which are used in landingpad clauses where
typeinfo would normally go.

Major aspects that still need work:
- Non-call exceptions in __try bodies won't work yet. The plan is to
  outline the __try block in the frontend to keep things simple.
- Filter expressions cannot use local variables until capturing is
  implemented.
- __finally blocks will not run after exceptions. Fixing this requires
  work in the LLVM SEH preparation pass.

The IR lowering looks like this:

// C code:
bool safe_div(int n, int d, int *r) {
  __try {
    *r = normal_div(n, d);
  } __except(_exception_code() == EXCEPTION_INT_DIVIDE_BY_ZERO) {
    return false;
  }
  return true;
}

; LLVM IR:
define i32 @filter(i8* %e, i8* %fp) {
  %ehptrs = bitcast i8* %e to i32**
  %ehrec = load i32** %ehptrs
  %code = load i32* %ehrec
  %matches = icmp eq i32 %code, i32 u0xC0000094
  %matches.i32 = zext i1 %matches to i32
  ret i32 %matches.i32
}

define i1 zeroext @safe_div(i32 %n, i32 %d, i32* %r) {
  %rr = invoke i32 @normal_div(i32 %n, i32 %d)
      to label %normal unwind to label %lpad

normal:
  store i32 %rr, i32* %r
  ret i1 1

lpad:
  %ehvals = landingpad {i8*, i32} personality i32 (...)* @__C_specific_handler
      catch i8* bitcast (i32 (i8*, i8*)* @filter to i8*)
  %ehptr = extractvalue {i8*, i32} %ehvals, i32 0
  %sel = extractvalue {i8*, i32} %ehvals, i32 1
  %filter_sel = call i32 @llvm.eh.seh.typeid.for(i8* bitcast (i32 (i8*, i8*)* @filter to i8*))
  %matches = icmp eq i32 %sel, %filter_sel
  br i1 %matches, label %eh.except, label %eh.resume

eh.except:
  ret i1 false

eh.resume:
  resume
}

Reviewers: rjmccall, rsmith, majnemer

Differential Revision: http://reviews.llvm.org/D5607

llvm-svn: 226760
2015-01-22 01:36:17 +00:00
Matt Arsenault 6365ffea3e Add __builtin_amdgpu_class
llvm-svn: 225314
2015-01-06 23:14:57 +00:00
Tom Stellard d8e38a3206 R600: Handle amdgcn triple
For now there is no difference between amdgcn and r600.

llvm-svn: 225294
2015-01-06 20:34:47 +00:00
Craig Topper 2094d8fe88 [x86] Add the (v)cmpps/pd/ss/sd builtins to match gcc. Use them in the sse intrinsic files.
These still lower to the same intrinsics as before.

This is preparation for bounds checking the immediate on the AVX version of the builtin so we don't pass illegal immediates into the backend. Since SSE uses a smaller immediate, it's not possible to bounds check when using a shared builtin. Rather than creating a Clang-specific builtin for the different immediate, I decided (after consulting with Chandler) that it was better to match gcc.

llvm-svn: 224879
2014-12-27 06:59:57 +00:00
Saleem Abdulrasool 86b881c63e CodeGen: implement __emit intrinsic
For MSVC compatibility, add the `__emit' builtin. This is used in the Windows
SDK headers, and must therefore be implemented as a builtin rather than an
intrinsic.

The `__emit' builtin provides a mechanism to emit a 16-bit opcode instruction
into the stream. The value must be a compile time constant expression. No
guarantees are made about the CPU and memory states after the execution of the
instruction.

Due to the unchecked nature of the builtin, only support this on Windows on ARM.

llvm-svn: 224438
2014-12-17 17:52:30 +00:00
Peter Collingbourne f770683f14 Implement the __builtin_call_with_static_chain GNU extension.
The extension has the following syntax:

  __builtin_call_with_static_chain(Call, Chain)
  where Call must be a function call expression and Chain must be of pointer type

This extension performs a function call Call with a static chain pointer
Chain passed to the callee in a designated register. This is useful for
calling foreign language functions whose ABI uses static chain pointers
(e.g. to implement closures).
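A hedged usage sketch; callee_with_chain and ctx are illustrative names:

  int callee_with_chain(int x);        /* callee expecting a static chain pointer */

  int call_through(int x, void *ctx) {
    return __builtin_call_with_static_chain(callee_with_chain(x), ctx);
  }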

Differential Revision: http://reviews.llvm.org/D6332

llvm-svn: 224167
2014-12-12 23:41:25 +00:00
Duncan P. N. Exon Smith fb49491477 IR: Update clang for Metadata/Value split in r223802
Match LLVM API changes from r223802.

llvm-svn: 223803
2014-12-09 18:39:32 +00:00
Saleem Abdulrasool a14ac3f437 CodeGen: refactor ARM builtin handling
Create a helper function to construct a value for the ARM hint intrinsic
rather than inlining the construction.  In order to avoid the use of the sentinel
value, inline the use of intrinsic instruction retrieval.  NFC.

llvm-svn: 223338
2014-12-04 04:52:37 +00:00
Reid Kleckner ee7cf84c8f Use nullptr to silence -Wsentinel when self-hosting on Windows
Richard rejected my Sema change to interpret an integer literal zero in
a varargs context as a null pointer, so -Wsentinel sees an integer
literal zero and fires off a warning. Only CodeGen currently knows that
it promotes integer literal zeroes in this context to pointer size on
Windows.  I didn't want to teach -Wsentinel about that compatibility
hack. Therefore, I'm migrating to C++11 nullptr.

llvm-svn: 223079
2014-12-01 22:02:27 +00:00
Bill Schmidt 9ec8cea02b [PowerPC] Add vec_vsx_ld and vec_vsx_st intrinsics
This patch enables the vec_vsx_ld and vec_vsx_st intrinsics for
PowerPC, which provide programmer access to the lxvd2x, lxvw4x,
stxvd2x, and stxvw4x instructions.

New code in altivec.h defines these in terms of new builtins, which
are themselves defined in BuiltinsPPC.def.  The builtins are converted
to LLVM intrinsics in CGBuiltin.cpp.  Additional code is added to
builtins-ppc-vsx.c to verify the correct generation of the intrinsics.
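A minimal usage sketch (assuming a build with -maltivec -mvsx):

  #include <altivec.h>

  vector double copy_vec(const double *src, double *dst) {
    vector double v = vec_vsx_ld(0, src);   /* lxvd2x */
    vec_vsx_st(v, 0, dst);                  /* stxvd2x */
    return v;
  }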

Note that I moved the other VSX builtins so all VSX builtins will be
alphabetical in their own section in BuiltinsPPC.def.

There is a companion patch for LLVM.

llvm-svn: 221768
2014-11-12 04:19:56 +00:00
Alexey Samsonov e396bfc064 Bundle conditions checked by UBSan with sanitizer kinds they implement.
Summary:
This change makes CodeGenFunction::EmitCheck() take several
conditions that need to be checked (all of them need to be true),
together with the sanitizer kinds these checks are for. This would allow
splitting one call into the UBSan runtime into several calls in case
different sanitizer kinds have different recoverability
settings.

Tests should be fixed accordingly, I'm working on it.

Test Plan: regression test suite.

Reviewers: rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D6219

llvm-svn: 221716
2014-11-11 22:03:54 +00:00
Alexey Samsonov 4c1a96f519 Propagate SanitizerKind into CodeGenFunction::EmitCheck() call.
Make sure CodeGenFunction::EmitCheck() knows which sanitizer
it emits a check for. Make the CheckRecoverableKind enum an
implementation detail and move it out of the header.

Currently CheckRecoverableKind is determined by the type of
sanitizer ("unreachable" and "return" are unrecoverable,
"vptr" is always-recoverable, all the rest are recoverable).
This will change in the future if we allow specifying which sanitizers
are recoverable, and which are not, via the -fsanitize-recover= flag.

No functionality change.

llvm-svn: 221635
2014-11-10 22:27:30 +00:00
Alexey Samsonov edf99a92c0 Introduce a SanitizerKind enum to LangOptions.
Use a bitmask to store the set of enabled sanitizers instead of a
bitfield. On the negative side, it makes the syntax for querying the
set of enabled sanitizers a bit more clunky. On the positive side, we
will be able to use SanitizerKind to eventually implement the
new semantics for the -fsanitize-recover= flag, which would allow us
to make some sanitizers recoverable and some non-recoverable.

No functionality change.

llvm-svn: 221558
2014-11-07 22:29:38 +00:00
Reid Kleckner 06ea7d6213 Lower __builtin_fabs* to @llvm.fabs.*
mingw64's headers implement fabs by calling __builtin_fabs, so using the
library call results in an infinite loop. If the backend legalizes
@llvm.fabs as a call to fabs later, things should work out, as the crt
provides a definition.

llvm-svn: 221206
2014-11-03 23:52:09 +00:00
Reid Kleckner 4cad00abf3 Remove dead AST type argument to EmitFAbs
llvm-svn: 221205
2014-11-03 23:51:40 +00:00
Alexey Samsonov 035462c1cf Get rid of SanitizerOptions::Disabled global. NFC.
SanitizerOptions is not even a POD now, so having a global variable of
this type is not nice. Instead, provide a regular constructor and a clear()
method, and let each CodeGenFunction have its own copy of the SanitizerOptions
it uses.

llvm-svn: 220920
2014-10-30 19:33:44 +00:00
Saleem Abdulrasool a25fbef088 CodeGen: add __readfsdword builtin
The Windows NT SDK uses __readfsdword and declares it as a compiler-provided
builtin (#pragma intrinsic(__readfsdword)).  Because intrin.h is not referenced
by winnt.h, it is not possible to provide an out-of-line definition for the
intrinsic.  Provide a proper compiler builtin definition.
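A hedged usage sketch (32-bit x86 Windows):

  unsigned long read_fs_slot(unsigned long offset) {
    return __readfsdword(offset);        /* reads a DWORD at FS:[offset] */
  }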

llvm-svn: 220859
2014-10-29 16:35:41 +00:00
Matt Arsenault 2174a9dc28 R600: Update for div_fmas intrinsic change
llvm-svn: 220339
2014-10-21 22:21:41 +00:00
Hal Finkel d2208b59cf Add __sync_fetch_and_nand (again)
Prior to GCC 4.4, __sync_fetch_and_nand was implemented as:

  { tmp = *ptr; *ptr = ~tmp & value; return tmp; }

but this was changed in GCC 4.4 to be:

  { tmp = *ptr; *ptr = ~(tmp & value); return tmp; }

In response to this change, support for sync_fetch_and_nand (and
sync_nand_and_fetch) was removed in r99522 in order to avoid miscompiling code
depending on the old semantics. However, at this point:

  1. Many years have passed, and the amount of code relying on the old
     semantics is likely smaller.

  2. Through the work of many contributors, all LLVM backends have been updated
     such that "atomicrmw nand" provides the newer GCC 4.4+ semantics (this process
     was completed in July of 2014 and added to the release notes in r212635).

  3. The lack of this intrinsic is now a needless impediment to porting codes
     from GCC to Clang (I've now seen several examples of this).

It is true, however, that we still set GNUC_MINOR to 2 (corresponding to GCC
4.2). To compensate for this, and to address the original concern regarding
code relying on the old semantics, I've added a warning that specifically
details the fact that the semantics have changed and that we provide the newer
semantics.

Fixes PR8842.
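A minimal example of the re-added builtin under the newer semantics:

  #include <stdint.h>

  /* With the GCC 4.4+ semantics, *p becomes ~(old & v). */
  uint32_t fetch_nand(uint32_t *p, uint32_t v) {
    return __sync_fetch_and_nand(p, v);  /* returns the previous value of *p */
  }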

llvm-svn: 218905
2014-10-02 20:53:50 +00:00
Jan Vesely b4379f9c2c CGBuiltin: Use frem instruction rather than libcall to implement fmod
AFAICT the semantics of frem match libm's fmod.
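A minimal sketch of the affected builtin, which can now be emitted as an frem instruction rather than a libcall:

  float remainder_of(float x, float y) {
    return __builtin_fmodf(x, y);
  }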

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 218488
2014-09-26 01:19:41 +00:00
Hal Finkel bcc06085a8 Add __builtin_assume and __builtin_assume_aligned using @llvm.assume.
This makes use of the recently-added @llvm.assume intrinsic to implement a
__builtin_assume(bool) intrinsic (to provide additional information to the
optimizer). This hooks up __assume in MS-compatibility mode to mirror
__builtin_assume (the semantics have been intentionally kept compatible), and
implements GCC's __builtin_assume_aligned as assume((p - o) & mask == 0). LLVM
now contains special logic to deal with assumptions of this form.
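A hedged example exercising both builtins:

  int use_hints(int *p, int n) {
    __builtin_assume(n > 0);                          /* optimizer hint only */
    int *q = (int *)__builtin_assume_aligned(p, 32);  /* q is assumed 32-byte aligned */
    return q[0] + n;
  }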

llvm-svn: 217349
2014-09-07 22:58:14 +00:00
James Molloy 163b1ba471 [ARMv8] Add support for 32-bit MIN/MAXNM and directed rounding.
This patch adds support for the 32-bit numeric max/min and directed round-to-integral NEON intrinsics that were added as part of v8, along with unit tests.

Patch by Graham Hunter!

llvm-svn: 217242
2014-09-05 13:50:34 +00:00
Tom Stellard c4e0c1075b CGBuiltin: Use @llvm.fabs rather than fabs libcall when emitting builtins
Using the intrinsic allows the SelectionDAGBuilder to turn this call
into the FABS Node and also the intrinsic is something the vectorizer knows
how to vectorize.

This patch also sets the readnone attribute on this call, which should
enable additional optimizations.

llvm-svn: 217042
2014-09-03 15:24:29 +00:00
Craig Topper 5fc8fc2d31 Simplify creation of a bunch of ArrayRefs by using None, makeArrayRef or just letting them be implicitly created.
llvm-svn: 216528
2014-08-27 06:28:36 +00:00
Yi Kong 1d268af094 ARM: Add dbg builtin intrinsic
llvm-svn: 216452
2014-08-26 12:48:06 +00:00
Hal Finkel 6208251923 Implement __builtin_signbitl for PowerPC
PowerPC uses the special PPC_FP128 type for long double on Linux, which is
composed of two 64-bit doubles. The higher-order double (which contains the
overall sign) comes first, and so the __builtin_signbitl implementation
requires special handling to extract the sign bit.

Fixes PR20691.
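Usage is unchanged; a minimal example whose IR now handles the ppc_fp128 layout:

  int sign_of(long double x) {
    return __builtin_signbitl(x);
  }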

llvm-svn: 216341
2014-08-24 03:47:06 +00:00
Alexey Samsonov 70b9c01bd4 Pass expressions instead of argument ranges to EmitCall/EmitCXXConstructorCall.
Summary:
This is a first small step towards passing a generic "Expr" instead of
an ArgBeg/ArgEnd pair into the EmitCallArgs() family of methods. Having the "Expr" will
allow us to get the corresponding FunctionDecl and its ParmVarDecls,
thus allowing us to alter CodeGen depending on the function/parameter
attributes.

No functionality change.

Test Plan: regression test suite

Reviewers: rnk

Reviewed By: rnk

Subscribers: aemerson, cfe-commits

Differential Revision: http://reviews.llvm.org/D4915

llvm-svn: 216214
2014-08-21 20:26:47 +00:00
Matt Arsenault dbb84916d9 R600: Add ldexp intrinsic
llvm-svn: 215738
2014-08-15 17:44:32 +00:00
Yi Kong a5548431a5 AArch64: Prefetch intrinsic
llvm-svn: 215569
2014-08-13 19:18:20 +00:00
Yi Kong 26d104a9ec ARM: Prefetch intrinsics
llvm-svn: 215568
2014-08-13 19:18:14 +00:00
Yi Kong 1083eb5c11 AArch64: Resolve some FIXMEs in CGBuiltin left over from backend merge
Merge vrshr_n_v and vqshlu_n_v with ARM.

Remove FIXME comments for others as they can't actually be shared.

NFC.

Differential Revision: http://reviews.llvm.org/D4697

llvm-svn: 214173
2014-07-29 09:25:17 +00:00
Tim Northover 40956e64f2 AArch64: update Clang for merged arm64/aarch64 triples.
The main subtlety here is that the Darwin tools still need to be given "-arch
arm64" rather than "-arch aarch64". Fortunately this already goes via a custom
function to handle weird edge-cases in other architectures, and it is tested.

I removed a few arm64_be tests because that really isn't an interesting thing
to worry about. No-one using big-endian is also referring to the target as
arm64 (at least as far as toolchains go). Mostly they date from when arm64 was
a separate target and we *did* need a parallel name simply to test it at all.
Now aarch64_be is sufficient.

llvm-svn: 213744
2014-07-23 12:32:58 +00:00
Alexey Samsonov 24cad99307 [UBSan] Add !nosanitize metadata to the code generated by UBSan.
This is used to mark the instructions emitted by Clang to implement a
variety of UBSan checks. Generally, we don't want to instrument these
instructions with other sanitizers (like ASan).

Reviewed in http://reviews.llvm.org/D4544

llvm-svn: 213291
2014-07-17 18:46:27 +00:00
Hal Finkel 3e49fda0d4 Add basic (noop) CodeGen support for __assume
Clang supports __assume, at least at the semantic level, when MS extensions are
enabled. Unfortunately, trying to actually compile code using __assume would
result in this error:

  error: cannot compile this builtin function yet

__assume is an optimizer hint, and can be ignored at the IR level. Until LLVM
supports assumptions at the IR level, a noop lowering is valid, and that is
what is done here.
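A small example that now compiles under -fms-extensions, with the hint simply dropped:

  int checked_div(int a, int b) {
    __assume(b != 0);        /* no IR emitted for the hint */
    return a / b;
  }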

llvm-svn: 213206
2014-07-16 22:44:54 +00:00
Matt Arsenault 8587711164 Add codegen for more R600 builtins
llvm-svn: 213079
2014-07-15 17:23:46 +00:00
Yi Kong 4d5e23f53a ARM: Implement __builtin_arm_nop intrinsic
This patch implements the __builtin_arm_nop intrinsic for AArch32 and AArch64,
which generates hint 0x0, the alias of the NOP instruction.

This intrinsic is necessary to implement the ACLE __nop intrinsic.

Differential Revision: http://reviews.llvm.org/D4495

llvm-svn: 212947
2014-07-14 15:20:09 +00:00
Saleem Abdulrasool 572250d60a CodeGen: support hint intrinsics from ACLE on AArch64
This adds support for the ACLE hint intrinsics on AArch64 similar to ARM.  This
is required to properly support ACLE on AArch64.

llvm-svn: 212890
2014-07-12 23:27:22 +00:00
Reid Kleckner ed5d4adb36 MS extension: Make __noop be the integer zero, not void
We still don't accept '__noop;', and we don't consider __noop to be the
integer literal zero.  More work is needed.

llvm-svn: 212839
2014-07-11 20:22:55 +00:00
Saleem Abdulrasool e700cab4e9 CodeGen: add support for a few MSVC ARM intrinsics
This adds support for simple MSVC compatibility mode intrinsics.  These
intrinsics are simple in that they are either directly passed through to the
annotated MSBuiltin intrinsic or they mirror existing GCC builtins.

llvm-svn: 212378
2014-07-05 20:10:05 +00:00
Saleem Abdulrasool 96bfda8dbc CodeGen: add support for MSBuiltin aliases
This completes the infrastructure for the new MSBuiltin aliases in the
instruction definitions.  These behave similarly to the GCCBuiltin in that they
can be implicitly constructed without special handling unless needed.

With this change it is possible to annotate an LLVM intrinsic in the backend
instruction definitions and indicate it as a builtin in the Builtin*.def files
in clang via LANGBUILTIN.  That will automatically pass through the instruction
much like a GCCBuiltin.

Note that there is no need for the special handling for ensuring that the
compatibility flag is enabled since the filtering on the LANGBUILTIN will
automatically prevent the intrinsic from bleeding into non-MS compatible
compiler invocations.

llvm-svn: 212359
2014-07-04 21:49:39 +00:00
Saleem Abdulrasool ece7217f70 ARM: rename ARM builtins to use __builtin_arm prefix
This corrects SVN r212196's naming change to use the proper prefix of
`__builtin_arm_` instead of `__builtin_`.

Thanks to Yi Kong for pointing out the incorrect naming!

llvm-svn: 212253
2014-07-03 02:43:20 +00:00
Saleem Abdulrasool 4bddd9d400 CodeGen: make target builtins support languages
This extends the target builtin support to allow language specific annotations
(i.e. LANGBUILTIN).  This is to allow MSVC compatibility whilst retaining the
ability to have EABI targets use a __builtin_ prefix.  This is merely to allow
uniformity in the EABI case where the unprefixed name is provided as an alias in
the header.

llvm-svn: 212196
2014-07-02 17:41:27 +00:00
Tim Northover 3acd6bd0b6 ARM: add support for v8 ldaex/stlex builtins.
ARMv8 adds (to both AArch32 and AArch64) acquiring and releasing
variants of the exclusive operations, in line with the C++11 memory
model.

This adds support for two new intrinsics to expose them to C & C++
developers directly: __builtin_arm_ldaex and __builtin_arm_stlex, in
direct analogy with the versions with no implicit barrier.

rdar://problem/15885451

llvm-svn: 212175
2014-07-02 12:56:02 +00:00
Craig Topper 00bbdcf9b3 Remove llvm:: from uses of ArrayRef.
llvm-svn: 211987
2014-06-28 23:22:23 +00:00
Matt Arsenault 56f008d538 Add R600 builtin codegen.
llvm-svn: 211631
2014-06-24 20:45:01 +00:00
Tim Northover 6ea28bdef5 ARM: remove dead CodeGen functions.
These two are no longer being used by NEON codegen.

llvm-svn: 211586
2014-06-24 12:07:44 +00:00
Jim Grosbach e59c43dc21 Fix spelling. s/overloaed/overloaded/
llvm-svn: 211530
2014-06-23 20:28:43 +00:00
Saleem Abdulrasool 114efe0dc8 CodeGen: improve MS intrinsics support
Add support for _InterlockedCompareExchangePointer, _InterlockedExchangePointer,
and _InterlockedExchange.  These are available as compiler intrinsics on ARM and x86.
These are used directly by the Windows SDK headers without use of the intrin
header.
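A hedged usage sketch of one of the added intrinsics:

  /* Stores 'value' into *slot only if *slot was null; returns the previous
     contents of *slot. */
  void *publish_once(void *volatile *slot, void *value) {
    return _InterlockedCompareExchangePointer(slot, value, (void *)0);
  }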

llvm-svn: 211216
2014-06-18 20:51:10 +00:00
Jim Grosbach 79140826bc AArch64: Support for __builtin_arm_rbit() and __builtin_arm_rbit64().
__builtin_arm_rbit() and __builtin_arm_rbit64().

rdar://9283021

llvm-svn: 211060
2014-06-16 21:56:02 +00:00
Jim Grosbach 171ec34544 ARM: Support for __builtin_arm_rbit() intrinsic.
Reverse the bits in a word. Maps to the RBIT instruction.

rdar://9283021
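A minimal usage sketch:

  unsigned reverse_bits(unsigned x) {
    return __builtin_arm_rbit(x);        /* lowers to RBIT */
  }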

llvm-svn: 211059
2014-06-16 21:55:58 +00:00
Tim Northover b49b04bbe0 IR-change: cmpxchg operations now return { iN, i1 }.
This is a minimal fix for clang. I'll soon add support for generating
weak variants when requested, but that's not really necessary for the
LLVM change in isolation.

llvm-svn: 210907
2014-06-13 14:24:59 +00:00
Richard Smith 760520bcb7 Add __builtin_operator_new and __builtin_operator_delete, which act like calls
to the normal non-placement ::operator new and ::operator delete, but allow
optimizations like new-expressions and delete-expressions do.

llvm-svn: 210137
2014-06-03 23:27:44 +00:00
Michael J. Spencer 5ce26687f2 [CodeGen] Don't use SizeTy for EmitNeonSplat.
llvm-svn: 210042
2014-06-02 19:48:59 +00:00
Michael J. Spencer dd59775f06 [CodeGen] Don't cast and use SizeTy instead of Int32Ty when constructing {extract,insert} vector element instructions.
llvm-svn: 209942
2014-05-31 00:22:12 +00:00
Tim Northover 573cbee543 AArch64/ARM64: rename ARM64 components to AArch64
This keeps Clang consistent with backend naming conventions.

llvm-svn: 209579
2014-05-24 12:52:07 +00:00
Tim Northover 25e8a6754e AArch64/ARM64: update Clang after AArch64 removal.
A few (mostly CodeGen) parts of Clang were tightly coupled to the
AArch64 backend. Now that it's gone, they will not even compile.

I've also deduplicated RUN lines in many of the AArch64 tests. This
might improve "make check-all" time noticeably: some of those NEON
tests were monsters.

llvm-svn: 209578
2014-05-24 12:51:25 +00:00
Craig Topper 8a13c4180e [C++11] Use 'nullptr'. CodeGen edition.
llvm-svn: 209272
2014-05-21 05:09:00 +00:00
Hao Liu 9f9492b657 [ARM64] Fix a bug where right shifting a uint64_t by 64 generated an incorrect result.
llvm-svn: 208761
2014-05-14 08:59:30 +00:00
Saleem Abdulrasool 956c2ec532 CodeGen: complete ARM ACLE hint 8.4 support
Add support for the remaining hints from the ACLE.  Although __dbg is listed as
a hint, it is handled differently, so it is not covered by this change.

llvm-svn: 207930
2014-05-04 02:52:25 +00:00
Saleem Abdulrasool 38ed6de3a0 CodeGen: rename __builtin_arm_sevl to __sevl
ACLE adds the __sevl() extension.  Rename the hint from a custom name to the
ACLE specified name.

llvm-svn: 207829
2014-05-02 06:53:57 +00:00
James Molloy fa40368d9d [ARM64] Add arm64_be where it was accidentally missed from a bunch of if-conditions.
I think this is the last commit for ARM64 big endian in clang. This commit makes
arm_neon.h compile correctly.

llvm-svn: 207624
2014-04-30 10:11:40 +00:00
Hao Liu a19a2e2da6 [ARM64] Fix a bug where UQSHL/SQSHL with a constant i64 shift amount could not be selected.
llvm-svn: 207401
2014-04-28 07:36:12 +00:00
Saleem Abdulrasool 2b99f2f7a4 CodeGen: remove an unused variable
llvm-svn: 207390
2014-04-28 02:29:11 +00:00
Sylvestre Ledru 464902589e remove useless code
llvm-svn: 207360
2014-04-27 14:57:31 +00:00
Saleem Abdulrasool b9f07e3dbc CodeGen: add __yield intrinsic for ARM
The __yield intrinsic generates a hint instruction to indicate that the thread
is not performing any useful operations at the moment.  This is for
compatibility with MSVC, although the intrinsic is also part of the ACLE, and
is enabled globally as a result.
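A hedged usage sketch:

  void spin_until_set(volatile int *flag) {
    while (!*flag)
      __yield();                         /* emits a yield hint each iteration */
  }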

llvm-svn: 207275
2014-04-25 21:13:29 +00:00
Saleem Abdulrasool 0fd930e86c CodeGen: replace use of @llvm.arm.sevl with @llvm.arm.hint
Use the new generic @llvm.arm.hint hint intrinsic rather than the specialised
@llvm.arm.sevl hint instruction.

llvm-svn: 207243
2014-04-25 17:25:46 +00:00
Tim Northover b17f9a4609 ARM64: add a few bits of polynomial intrinsic codegen.
llvm-svn: 205303
2014-04-01 12:23:08 +00:00
Tim Northover 74b2def0c5 ARM64: add missing ldN/stN intrinsics and enable tests.
llvm-svn: 205296
2014-04-01 10:37:47 +00:00
Tim Northover 0c68faa455 ARM64: enable aarch64-neon-intrinsics.c test
This adds support for the various NEON intrinsics used by
aarch64-neon-intrinsics.c (originally written for AArch64) and enables the
test.

My implementations are designed to be semantically correct; the actual code
quality looks like it's a wash between the two backends, and is frequently
different (hence the large number of CHECK changes).

llvm-svn: 205210
2014-03-31 15:47:09 +00:00
Dmitri Gribenko f461da5306 Remove unused variable
llvm-svn: 205169
2014-03-31 07:52:35 +00:00
Tim Northover 6166ec21be ARM64: remove currently trivial switch statement
llvm-svn: 205167
2014-03-31 07:20:13 +00:00
Tim Northover 97606edd75 ARM64: Fix GCC warning in CGBuiltin.cpp
llvm-svn: 205104
2014-03-29 15:26:07 +00:00
Tim Northover a2ee433c8d ARM64: initial clang support commit.
This adds Clang support for the ARM64 backend. There are definitely
still some rough edges, so please bring up any issues you see with
this patch.

As with the LLVM commit though, we think it'll be more useful for
merging with AArch64 from within the tree.

llvm-svn: 205100
2014-03-29 15:09:45 +00:00
Christian Pirker f01cd6f57b Add ARM big endian Target (armeb, thumbeb)
Reviewed at http://llvm-reviews.chandlerc.com/D3096

llvm-svn: 205008
2014-03-28 14:40:46 +00:00
Reid Kleckner 597e81dea1 -fms-extensions: Add __va_start builtin, which is used for x64
The main difference between __va_start and __builtin_va_start is that
the address of the va_list has already been taken, and the va_list is
always a char*.

__va_end and __va_arg are not needed.

llvm-svn: 204821
2014-03-26 15:38:33 +00:00
Renato Golin c491a8d457 Add support for __builtin___clear_cache in Clang
Add the mapping from __builtin___clear_cache to @llvm.clear_cache.
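A minimal usage sketch of the builtin being mapped:

  void flush_icache_range(char *begin, char *end) {
    __builtin___clear_cache(begin, end);
  }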

llvm-svn: 204820
2014-03-26 15:36:05 +00:00
Timur Iskhodzhanov f7af2e6de8 Fix a compile-time warning
lib/CodeGen/CGBuiltin.cpp:3136:12: warning: variable ‘TblPos’ set but not used [-Wunused-but-set-variable]

llvm-svn: 204599
2014-03-24 11:09:01 +00:00
Arnaud A. de Grandmaison 6756a497a1 Cleanup dead assignments reported by scan-build
llvm-svn: 204569
2014-03-23 20:28:07 +00:00
Tim Northover 0622b3a67a Update for IR: add a second AtomicOrdering to cmpxchg insts.
rdar://problem/15996804

llvm-svn: 203560
2014-03-11 10:49:03 +00:00
Ted Kremenek 90097491ed Remove 'break' dominated by 'return' in 'EmitBuiltinExpr'.
llvm-svn: 203080
2014-03-06 05:37:38 +00:00
Tim Northover b44e080dbb AArch64: use less cluttered intrinsic for vtbl/vtbx
The table is always 128-bit so there's no reason to specify it every time we
want the intrinsic.

llvm-svn: 202259
2014-02-26 11:55:15 +00:00
Tim Northover 2df47cedeb AArch64: use different type modifier in arm_neon.td
The 'f' modifier is really designed for integer-type arguments (according to
its documentation). It's better to use the "half width, same number" modifier.

Should be no user-visible change.

llvm-svn: 202152
2014-02-25 13:53:01 +00:00
Christian Pirker 9b019ae899 Add AArch64 big endian Target (aarch64_be)
llvm-svn: 202151
2014-02-25 13:51:00 +00:00
Warren Hunt 20e4a5d2af Reapply 201734 but with appropriate gcc compatibility
Because GCC incorrectly defines _mm_prefetch to take anything that casts 
to void*, people have started using that behavior.  The previous patch 
that made _mm_prefetch actually take a const char * broke compatibility 
with existing code.  This update to the patch leaves the macro that 
defines _mm_prefetch with the (void*) cast when _MSC_VER is not defined.

llvm-svn: 201901
2014-02-21 23:08:53 +00:00
Tim Northover a0c95eb2d6 Remove commas at the end of lists (C++11 again)
llvm-svn: 201849
2014-02-21 12:16:59 +00:00
Tim Northover 8fe03d6111 ARM & AArch64: use table for EmitCommonNeonBuiltinExpr
This extends the intrinsic lookup table format slightly, and adds
entries for use by the shared ARM/AArch64 definitions. The benefit is
currently smaller than for the SISD intrinsics (there's more custom
code implementing this set), but a few lines are saved and there's
scope for future expansion.

llvm-svn: 201848
2014-02-21 11:57:24 +00:00
Tim Northover 2d83796860 AArch64: refactor table-driven NEON lookup.
This extracts the table-driven intrinsic lookup phase into a separate
function, to be used by EmitCommonNeonBuiltinExpr soon.

It also simplifies the logic used in that lookup, since VectorCastArgN
and ScalarArgN were actually identical.

llvm-svn: 201847
2014-02-21 11:57:20 +00:00
Daniel Jasper 2f0f297bdb Revert r201734 and r201742.
This breaks backwards compatibility with existing code. Previously, this
was defined as

  #define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))

Which basically accepts any pointer. Changing this to char* simply
breaks a lot of existing code. I have tried changing char* to
"const void*", which seems to be the right thing as per Intel
specification this should work on basically any pointer. However,
apparently this breaks windows compatibility (because of a conflicting
declaration in windows.h).

So, we probably need to #ifdef this based on whether clang is compiling
for windows. According to Chandler, this might be done by introducing an
additional symbol to a fake type in BuiltinsX86.def and then condition
the type expansion on the platform.

llvm-svn: 201775
2014-02-20 11:10:48 +00:00
Warren Hunt 40d6f29ad8 Add _mm_prefetch and some others as MS builtins
This patch adds several built-ins that are required for ms 
compatibility. _mm_prefetch must be a built-in because it takes a 
compile-time constant argument and our prior approach of using a #define 
to the current built-in doesn't work in the presence of re-declaration 
of _mm_prefetch. The others can be obtained by including the windows 
system headers. If a user includes the windows system headers but not 
intrin.h they still need to work and therefore must be built-in because 
we don't get a chance to implement them in intrin.h in this case.
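A hedged usage sketch of the now builtin-backed _mm_prefetch:

  #include <xmmintrin.h>

  void prefetch_line(const void *p) {
    _mm_prefetch((const char *)p, _MM_HINT_T0);
  }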

llvm-svn: 201734
2014-02-19 23:20:20 +00:00
Tim Northover db3e5e2408 AArch64: look up EmitAArch64Scalar support before calling.
This fixes one immediate bug where an expression with side-effects
could be emitted twice during a NEON call.

It also prepares the way for folding CodeGen for many of the SISD
intrinsics into a table, reducing code size and hopefully increasing
performance eventually ("binary search + few switch cases" should be
better than "lots of switch cases").

llvm-svn: 201667
2014-02-19 11:55:06 +00:00
Tim Northover 0f6c9d0a9b ARM NEON: add vcvtX (with rounding mode) intrinsics to v8 ARM.
These instructions (well, the f32 ones) are supported on 32-bit ARMv8, not just
AArch64. Now that the arm_neon.td refactoring is complete, adding them is
surprisingly simple.

rdar://problem/16035743

llvm-svn: 201661
2014-02-19 10:37:13 +00:00
Tim Northover 1994fa7d3d ARM & AArch64 NEON: share the vabs implementation.
This changes ARM to use @llvm.fabs for floating-point vabs. Patterns
already existed in the backend, and it might help mid-end phases since
it's more likely to be understood than @llvm.arm.neon.vabs.

llvm-svn: 201313
2014-02-13 10:44:17 +00:00
Tim Northover 02b438754c AArch64: share slightly more NEON implementation with ARM.
The s64/u64 vcvt conversion operations are actually pretty much identical to
the s32/u32 ones in implementation, and can be shared with just one extra
variable.

llvm-svn: 201145
2014-02-11 11:27:44 +00:00
Tim Northover d23fc6cceb ARM: move vshll NEON implementation to common code
Now that both ARM backends use the same implementation for vshll operations,
the code can be shared. This is also a necessary LLVM/Clang interface update.

llvm-svn: 201094
2014-02-10 16:20:36 +00:00
Tim Northover a2e0a27d26 ARM: implement vshrn NEON intrinsic in terms of shr/trunc
Now the backend supports the natural LLVM IR, we can shamelessly steal the
AArch64 front-end code to implement the vshrn intrinsic on 32-bit ARM.

llvm-svn: 201086
2014-02-10 14:04:12 +00:00
Tim Northover 7ffb2c5523 ARM & AArch64: combine implementation of vcaXYZ intrinsics
Now that the back-end intrinsics are more regular, there's no need for the
special handling these got in the front-end, so they can be moved to
EmitCommonNeonBuiltinExpr.

llvm-svn: 200769
2014-02-04 14:55:52 +00:00
Tim Northover 02e38609e7 ARM: implement support for crypto intrinsics in arm_neon.h
llvm-svn: 200708
2014-02-03 17:28:04 +00:00
Tim Northover 51ab388266 AArch64: use new non-polymorphic crypto intrinsics
The LLVM backend now has invariant types on the various crypto-intrinsics,
because in all cases there's only really one interpretation.

llvm-svn: 200707
2014-02-03 17:28:00 +00:00
Tim Northover 5309111c22 ARM & AArch64: unify the rest of the completely shared NEON implementations
This should be the last routine patch: AArch64 does still delegate to
EmitARMBuiltinExpr, but the remaining instances have complications of
one sort or another so some more cunning thought will be needed.

llvm-svn: 200528
2014-01-31 10:46:52 +00:00
Tim Northover ba1e344d90 ARM & AArch64: another block of miscellaneous NEON sharing.
llvm-svn: 200527
2014-01-31 10:46:49 +00:00
Tim Northover 027b4ee607 ARM & AArch64: move shared vld/vst intrinsics to common implementation.
llvm-svn: 200526
2014-01-31 10:46:45 +00:00
Tim Northover 9d3ab5fe9f ARM & AArch64: more instructions into common block
llvm-svn: 200525
2014-01-31 10:46:41 +00:00
Tim Northover 61fc835d6e ARM & AArch64: merge another NEON block completely.
llvm-svn: 200524
2014-01-31 10:46:36 +00:00
Tim Northover 58c4474dea ARM & AArch64: extend shared NEON implementation to first block.
This extends the refactoring to the whole of the first block of
trivial correspondences (as a fairly arbitrary boundary).

llvm-svn: 200472
2014-01-30 14:48:01 +00:00
Tim Northover ac85c341ae ARM & AArch64: fully share NEON implementation of permutation intrinsics
As a starting point, this moves the CodeGen for NEON permutation
instructions (vtrn, vzip, vuzp) into a new shared function.

llvm-svn: 200471
2014-01-30 14:47:57 +00:00
Tim Northover c322f838bc ARM & AArch64: share the BI__builtin_neon enum defs.
llvm-svn: 200470
2014-01-30 14:47:51 +00:00
Kevin Qin ce1f0e85ba [AArch64 NEON] Fix a bug about vcles_f32 and vcled_f64.
As vcles_f32() and vcled_f64() are implemented with FCMGE, the operands
should be swapped.

llvm-svn: 199866
2014-01-23 03:42:06 +00:00
Hao Liu f96fd37888 [AArch64] The compare-to-zero intrinsics should be implemented with 'icmp/fcmp' and 'sext', not 'zext'. Modify the implementation by replacing zext with sext.
llvm-svn: 197898
2013-12-23 02:44:00 +00:00
Chad Rosier 6030c84a2f [AArch64] Refactor NEON floating-point Max/Min/Maxnm/Minnm across vector AArch64
intrinsics to use f32 types, rather than their vector equivalents.

llvm-svn: 197091
2013-12-11 23:21:39 +00:00
Chad Rosier c520fce72d [AArch64] Add NEON scalar floating-point compare LLVM AArch64 intrinsics that
use f32/f64 types, rather than their vector equivalents.

llvm-svn: 197071
2013-12-11 21:03:56 +00:00
Chad Rosier edd4403510 [AArch64] Refactor the NEON scalar floating-point reciprocal step and
floating-point reciprocal square root step LLVM AArch64 intrinsics to
use f32/f64 types, rather than their vector equivalents.

llvm-svn: 197070
2013-12-11 21:03:54 +00:00
Chad Rosier 6ce4387c5c [AArch64] Refactor the NEON scalar floating-point reciprocal estimate, floating-
point reciprocal exponent, and floating-point reciprocal square root estimate
LLVM AArch64 intrinsics to use f32/f64 types, rather than their vector
equivalents.

llvm-svn: 197069
2013-12-11 21:03:52 +00:00
Chad Rosier 17c248a7a2 [AArch64] Refactor the NEON floating-point absolute difference LLVM AArch64
intrinsic to use f32/f64 types, rather than their vector equivalents.

llvm-svn: 196969
2013-12-10 21:34:23 +00:00
Chad Rosier 37051a80e9 [AArch64] Refactor the NEON signed/unsigned floating-point convert to fixed-point
LLVM AArch64 intrinsics to use f32/f64, rather than their vector equivalents.

llvm-svn: 196968
2013-12-10 21:34:21 +00:00
Chad Rosier 8f6f3d124c [AArch64] Overload NEON signed/unsigned floating-point convert to fixed-point
and fixed-point convert to floating-point LLVM AArch64 intrinsics.

llvm-svn: 196967
2013-12-10 21:34:20 +00:00
Chad Rosier 11a78c86e1 [AArch64] Overload NEON signed/unsigned integer convert to floating-point
LLVM AArch64 intrinsics.

llvm-svn: 196966
2013-12-10 21:34:17 +00:00
Chad Rosier 8d96c803df [AArch64] Refactor the redundant code in the EmitAArch64ScalarBuiltinExpr()
function.  No functional change intended.

llvm-svn: 196936
2013-12-10 17:44:36 +00:00
Chad Rosier 58f6a1fee7 [AArch64] Refactor the Neon vector/scalar floating-point convert intrinsics so
that they use float/double rather than the vector equivalents when appropriate.

llvm-svn: 196931
2013-12-10 16:11:55 +00:00
Chad Rosier ff3b79aead [AArch64] Refactor the Neon vector/scalar floating-point convert implementation.
Specifically, reuse the ARM intrinsics when possible.

llvm-svn: 196927
2013-12-10 15:35:40 +00:00
Kevin Qin fb79d7f843 [AArch64 NEON] Support poly128_t and implement relevant intrinsic.
llvm-svn: 196888
2013-12-10 06:49:01 +00:00
Chad Rosier ce511f2fcb [AArch64] Refactor the NEON scalar reduce pairwise intrinsics so that they use
float/double rather than the vector equivalents when appropriate.

llvm-svn: 196836
2013-12-09 22:47:59 +00:00
Chad Rosier 01703584eb [AArch64] Refactor the NEON scalar reduce pairwise front-end codegen to remove
unnecessary patterns in tablegen.

llvm-svn: 196835
2013-12-09 22:47:57 +00:00
Chad Rosier ad3683c3cb [AArch64] Remove q and non-q intrinsic definitions from the NEON scalar reduce
pairwise implementation, using an overloaded definition instead.

llvm-svn: 196834
2013-12-09 22:47:55 +00:00
Hao Liu 844a7da243 [AArch64]Add missing pair intrinsics such as:
int32_t vminv_s32(int32x2_t a) 
which should be compiled into SMINP Vd.2S,Vn.2S,Vm.2S

llvm-svn: 196750
2013-12-09 03:52:22 +00:00
Kevin Qin ad53b87c70 [AArch64 NEON] Add ACLE intrinsic vceqz_f64.
llvm-svn: 196361
2013-12-04 08:02:11 +00:00
Kevin Qin 8903f8df4b [AArch64 NEON] Add missing compare intrinsics.
llvm-svn: 196359
2013-12-04 07:53:09 +00:00
Hao Liu a5246fde90 [AArch64]Add missing floating point convert, round and misc intrinsics.
E.g. int64x1_t vcvt_s64_f64(float64x1_t a) -> FCVTZS Dd, Dn

llvm-svn: 196211
2013-12-03 06:07:13 +00:00
Hao Liu 4b850c5e0d revert r196152.
This is a duplicate implementation.
E.g. this patch defines:
     float64_t vabd_f64(float64_t a, float64_t b)
But there is already a similar intrinsic "vabdd_f64" with the same types.
Also, this intrinsic will conflict with the vector-type intrinsic as follows (which is implemented by me and will be committed to trunk):
     float64x1_t vabd_f64(float64x1_t a, float64x1_t b).
Two functions shouldn't have the same name in arm_neon.h.
According to the ARM ACLE document, such a vabd_f64 with float64_t does not exist.
So I revert this commit.

llvm-svn: 196205
2013-12-03 05:35:17 +00:00
Hao Liu ce258820ca AArch64: Add missing scalar pair intrinsics.
E.g. "float32_t vaddv_f32(float32x2_t a)" to be matched into "faddp s0, v1.2s".

llvm-svn: 196199
2013-12-03 03:40:08 +00:00
Chad Rosier b0574f3bf7 [AArch64] Add missing NEON scalar floating-point to integer convert ACLEs.
llvm-svn: 196152
2013-12-02 21:07:24 +00:00
Hao Liu 8a0099e02c Fix the problem that the range check for scalar narrow shift is too wide.
E.g. the immediate value of vshrns_n_s16 is [1,16], which should be [1,8].

llvm-svn: 195942
2013-11-29 02:13:17 +00:00
Chad Rosier 9e59285cc8 [AArch64] Add support for NEON scalar floating-point absolute difference.
llvm-svn: 195804
2013-11-27 01:46:19 +00:00
Chad Rosier 52e31b20cb [AArch64] Add support for NEON scalar floating-point to integer convert
instructions.

llvm-svn: 195789
2013-11-26 22:17:51 +00:00
Ana Pazos dbd1a22496 Implemented Neon scalar vdup_lane intrinsics.
Fixed scalar dup alias and added test case.

llvm-svn: 195329
2013-11-21 08:15:01 +00:00
Ana Pazos 2b02688fd9 Implemented Neon scalar by element intrinsics.
Intrinsics implemented: vqdmull_lane, vqdmulh_lane, vqrdmulh_lane,
vqdmlal_lane, vqdmlsl_lane scalar Neon intrinsics.

llvm-svn: 195326
2013-11-21 07:36:33 +00:00
Hao Liu 171cedf61e Implement AArch64 neon instruction classes SIMD lsone and SIMD lsone-post.
llvm-svn: 195079
2013-11-19 02:17:31 +00:00
Hao Liu 5e4ce1ae9d Implement the newly added AArch64 ACLE functions for ld1/st1 with 2/3/4 vectors.
The functions are like: vst1_s8_x2 ...

llvm-svn: 194991
2013-11-18 06:33:43 +00:00
Benjamin Kramer 847c1d90e1 Remove unused but set variable.
llvm-svn: 194920
2013-11-16 11:47:52 +00:00
Ana Pazos 6f2a47a9e5 Implemented aarch64 Neon scalar vmulx_lane intrinsics
Implemented aarch64 Neon scalar vfma_lane intrinsics
Implemented aarch64 Neon scalar vfms_lane intrinsics

Implemented legacy vmul_n_f64, vmul_lane_f64, vmul_laneq_f64
intrinsics (v1f64 parameter type) using Neon scalar instructions.

Implemented legacy vfma_lane_f64, vfms_lane_f64,
vfma_laneq_f64, vfms_laneq_f64 intrinsics (v1f64 parameter type)
using Neon scalar instructions.

llvm-svn: 194889
2013-11-15 23:33:31 +00:00
Chad Rosier 7aaee48bf0 [AArch64] Add support for legacy AArch32 NEON scalar shift right by immediate
and accumulate instructions.

llvm-svn: 194732
2013-11-14 22:02:24 +00:00
Kevin Qin caac85e612 [AArch64 neon] support poly64 and relevant intrinsic functions.
llvm-svn: 194660
2013-11-14 03:29:16 +00:00
Kevin Qin 1718af6f0a Implement aarch64 neon instruction class misc.
llvm-svn: 194657
2013-11-14 02:45:18 +00:00
Jiangning Liu 18b707cb3f Implement AArch64 NEON instruction set AdvSIMD (table).
llvm-svn: 194649
2013-11-14 01:57:55 +00:00
Reid Kleckner 59e4a6f5e2 -fms-extensions: Recognize _alloca as an alias for the alloca builtin
Differential Revision: http://llvm-reviews.chandlerc.com/D1989

llvm-svn: 194617
2013-11-13 22:58:53 +00:00
Chad Rosier e714a962b5 [AArch64] Tests for legacy AArch32 NEON scalar shift by immediate instructions.
A number of non-overloaded intrinsics have been replaced by their overloaded
counterparts.

llvm-svn: 194599
2013-11-13 20:05:44 +00:00
Chad Rosier 249c714bb4 [AArch64] Add support for NEON scalar floating-point convert to fixed-point instructions.
llvm-svn: 194395
2013-11-11 18:04:22 +00:00
Jiangning Liu c628af66c7 Implement AArch64 Neon instruction set Perm.
llvm-svn: 194124
2013-11-06 03:35:53 +00:00
Jiangning Liu 37f5bb1b28 Implement AArch64 Neon instruction set Bitwise Extract.
llvm-svn: 194119
2013-11-06 02:26:12 +00:00
Jiangning Liu 34a7109b47 Implement AArch64 Neon Crypto instruction classes AES, SHA, and 3 SHA.
llvm-svn: 194086
2013-11-05 17:42:24 +00:00
Kevin Qin 9eece7b5e0 Implemented aarch64 neon intrinsic vcopy_lane with float type.
llvm-svn: 194042
2013-11-05 02:05:44 +00:00
Chad Rosier 74329d6cff [AArch64] Add support for NEON scalar fixed-point convert to floating-point instructions.
llvm-svn: 193817
2013-10-31 22:37:08 +00:00
Chad Rosier bdca387884 [AArch64] Add support for NEON scalar shift immediate instructions.
llvm-svn: 193791
2013-10-31 19:29:05 +00:00
Mark Lacey a8e7df3602 Add CodeGenABITypes.h for use in LLDB.
CodeGenABITypes is a wrapper built on top of CodeGenModule that exposes
some of the functionality of CodeGenTypes (held by CodeGenModule),
specifically methods that determine the LLVM types appropriate for
function argument and return values.

In addition to CodeGenABITypes.h, CGFunctionInfo.h is introduced, and the
definitions of ABIArgInfo, RequiredArgs, and CGFunctionInfo are moved
into this new header from the private headers ABIInfo.h and CGCall.h.

Exposing this functionality is one part of making it possible for LLDB
to determine the actual ABI locations of function arguments and return
values, making it possible for it to determine this for any supported
target without hard-coding ABI knowledge in the LLDB code.

llvm-svn: 193717
2013-10-30 21:53:58 +00:00
Chad Rosier 4d55e6e0a4 [AArch64] Add support for NEON scalar floating-point compare instructions.
llvm-svn: 193692
2013-10-30 15:20:07 +00:00
Peter Collingbourne b453cd64a7 Implement function type checker for the undefined behavior sanitizer.
This uses function prefix data to store function type information at the
function pointer.

Differential Revision: http://llvm-reviews.chandlerc.com/D1338

llvm-svn: 193058
2013-10-20 21:29:19 +00:00
Chad Rosier 3c03dee1d1 [AArch64] Add support for NEON scalar extract narrow instructions.
llvm-svn: 192971
2013-10-18 14:03:36 +00:00
Chad Rosier e7465644c6 [AArch64] Add support for NEON scalar three register different instruction
class.  The instruction class includes the signed saturating doubling
multiply-add long, signed saturating doubling multiply-subtract long, and
the signed saturating doubling multiply long instructions.

llvm-svn: 192909
2013-10-17 18:12:50 +00:00
Chad Rosier 00eef17dbe [AArch64] Add support for NEON scalar negate instruction.
llvm-svn: 192845
2013-10-16 21:04:53 +00:00
Chad Rosier e904137c01 [AArch64] Add support for NEON scalar absolute value instruction.
llvm-svn: 192844
2013-10-16 21:04:49 +00:00
Chad Rosier 2681b3fb61 Update comment.
llvm-svn: 192807
2013-10-16 16:30:39 +00:00
Chad Rosier 069b90463d [AArch64] Add support for NEON scalar signed saturating accumulate of unsigned
value and unsigned saturating accumulate of signed value instructions.

llvm-svn: 192801
2013-10-16 16:09:16 +00:00
Chad Rosier a70fb7b716 [AArch64] Add support for NEON scalar signed saturating absolute value and
scalar signed saturating negate instructions.

llvm-svn: 192734
2013-10-15 21:19:02 +00:00
Chad Rosier 193573ec89 [AArch64] Add support for NEON scalar integer compare instructions.
llvm-svn: 192597
2013-10-14 14:37:40 +00:00
Kevin Qin f22bf50443 Implemented aarch64 SIMD copy related ACLE intrinsic :
vget_lane, vset_lane, vcopy_lane, vcreate, vdup_n, vdup_lane, vmov_n.

llvm-svn: 192411
2013-10-11 02:34:30 +00:00
Hao Liu 1eade6d927 Implement AArch64 vector load/store multiple N-element structure class SIMD(lselem).
Including the following 14 instructions:
4 ld1 insts: load multiple 1-element structure to sequential 1/2/3/4 registers.
ld2/ld3/ld4: load multiple N-element structure to sequential N registers (N=2,3,4).
4 st1 insts: store multiple 1-element structure from sequential 1/2/3/4 registers.
st2/st3/st4: store multiple N-element structure from sequential N registers (N = 2,3,4).

llvm-svn: 192362
2013-10-10 17:01:49 +00:00
Tim Northover 72ace5cf12 Revert "Implement AArch64 vector load/store multiple N-element structure class SIMD(lselem). "
This reverts commit r192351. The LLVM side broke the build and the Clang tests
will inevitably fail without it.

llvm-svn: 192356
2013-10-10 16:00:08 +00:00
Hao Liu c319193636 Implement AArch64 vector load/store multiple N-element structure class SIMD(lselem).
Including the following 14 instructions:
4 ld1 insts: load multiple 1-element structure to sequential 1/2/3/4 registers.
ld2/ld3/ld4: load multiple N-element structure to sequential N registers (N=2,3,4).
4 st1 insts: store multiple 1-element structure from sequential 1/2/3/4 registers.
st2/st3/st4: store multiple N-element structure from sequential N registers (N = 2,3,4).

E.g. ld1(3 registers version) will load 32-bit elements {A, B, C, D, E, F} sequentially into the three 64-bit vectors list {BA, DC, FE}.
E.g. ld3 will load 32-bit elements {A, B, C, D, E, F} into the three 64-bit vectors list {DA, EB, FC}.

llvm-svn: 192351
2013-10-10 14:59:36 +00:00
Chad Rosier 0a903478c6 [AArch64] Add support for NEON scalar floating-point reciprocal estimate,
reciprocal exponent, and reciprocal square root estimate instructions.

llvm-svn: 192243
2013-10-08 22:09:29 +00:00
Chad Rosier 0babda4b9c [AArch64] Add support for NEON scalar signed/unsigned integer to floating-point
convert instructions.

llvm-svn: 192232
2013-10-08 20:43:46 +00:00
Matt Arsenault 2f15263807 Fix objectsize tests after r192117
llvm-svn: 192120
2013-10-07 19:00:18 +00:00
Chad Rosier 027dfade54 [AArch64] Add support for NEON scalar arithmetic instructions:
SQDMULH, SQRDMULH, FMULX, FRECPS, and FRSQRTS.

llvm-svn: 192112
2013-10-07 17:07:17 +00:00
Jiangning Liu b96ebac02b Implement aarch64 neon instruction set AdvSIMD (Across).
llvm-svn: 192029
2013-10-05 08:22:55 +00:00
Amaury de la Vieuville 21bf6ed730 Do not emit undefined lshr/ashr for NEON shifts
These IR instructions are undefined when the amount is equal to operand
size, but NEON right shifts support such shifts. Work around that by
emitting different IR in these cases.

llvm-svn: 191953
2013-10-04 13:13:15 +00:00
Jiangning Liu 4617e9dc85 Implement aarch64 neon instruction set AdvSIMD (3V elem).
llvm-svn: 191945
2013-10-04 09:21:17 +00:00
Joey Gouly 75987a65f3 [ARM] Add a builtin to allow you to use the 'sevl' instruction.
llvm-svn: 191816
2013-10-02 10:00:18 +00:00
Benjamin Kramer 9b1dfe8b56 Mark an impossible path as unreachable to pacify GCC.
llvm-svn: 191436
2013-09-26 16:36:08 +00:00
Benjamin Kramer 39c4924db9 Remove tabs.
llvm-svn: 191427
2013-09-26 12:16:47 +00:00
NAKAMURA Takumi 788af10a8a CGBuiltin.cpp: Prune a stray default: label. [-Wcovered-switch-default]
llvm-svn: 191277
2013-09-24 04:37:50 +00:00
Jiangning Liu 036f16dc8c Initial support for Neon scalar instructions.
Patch by Ana Pazos.

1.Added support for v1ix and v1fx types.
2.Added Scalar Pairwise Reduce instructions.
3.Added initial implementation of Scalar Arithmetic instructions.

llvm-svn: 191264
2013-09-24 02:48:06 +00:00
Eli Friedman f9d8c6cebb Add _mm_stream_si64 intrinsic.
While I'm here, also fix the alignment computation for the whole family of
intrinsics.

PR17298.
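A minimal usage sketch (x86-64):

  #include <emmintrin.h>

  void stream_store(long long *dst, long long value) {
    _mm_stream_si64(dst, value);         /* non-temporal 64-bit store */
  }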

llvm-svn: 191243
2013-09-23 23:38:39 +00:00
Joey Gouly 1e8637b259 [ARMv8] Add builtins for CRC instructions.
Patch by Bradley Smith!

llvm-svn: 190931
2013-09-18 10:07:09 +00:00
Hal Finkel 28b2ae3692 Restore the sqrt -> llvm.sqrt mapping in fast-math mode
This restores the sqrt -> llvm.sqrt mapping, but only in fast-math mode
(specifically, when the UnsafeFPMath or NoNaNsFPMath CodeGen options are
enabled). The @llvm.sqrt* intrinsics have slightly different semantics from the
libm call, specifically, they are undefined when given a non-zero negative
number (the libm calls will always return NaN for any negative number).

This mapping was removed in r100613, and replaced with a TODO, but at that time
the fast-math flags were not yet implemented. Now that we have these, restoring
this mapping is important because it will enable autovectorization of sqrt
calls in loops (at least in fast-math mode).

llvm-svn: 190646
2013-09-12 23:57:55 +00:00
Jiangning Liu 1bda93a252 Implement aarch64 neon instruction set AdvSIMD (3V Diff), covering the following 26 instructions,
SADDL, UADDL, SADDW, UADDW, SSUBL, USUBL, SSUBW, USUBW, ADDHN, RADDHN, SABAL, UABAL, SUBHN, RSUBHN, SABDL, UABDL, SMLAL, UMLAL, SMLSL, UMLSL, SQDMLAL, SQDMLSL, SMULL, UMULL, SQDMULL, PMULL

llvm-svn: 190289
2013-09-09 02:21:08 +00:00
Hao Liu b1852eed38 Implement aarch64 neon instructions in AdvSIMD(shift). About 24 shift instructions:
sshr,ushr,ssra,usra,srshr,urshr,srsra,ursra,sri,shl,sli,sqshlu,sqshl,uqshl,shrn,sqrshr$
 and 4 convert instructions:
      scvtf,ucvtf,fcvtzs,fcvtzu

llvm-svn: 189926
2013-09-04 09:29:13 +00:00
Tim Northover 550ce58312 ARM: comment on why vmull intrinsic has to exist for now.
llvm-svn: 189464
2013-08-28 09:46:40 +00:00
Tim Northover 4ae9812283 ARM: Emit normal IR for vaddhn/vsubhn NEON intrinsics
These operations "vector add high-half narrow" actually correspond to the
sequence:

    %sum = add <4 x i32> %lhs, %rhs
    %high = lshr <4 x i32> %sum, <i32 16, i32 16, i32 16, i32 16>
    %res = trunc <4 x i32> %high to <4 x i16>

Now that LLVM can spot this, Clang should emit the corresponding LLVM IR.

llvm-svn: 189463
2013-08-28 09:46:37 +00:00
Tim Northover 4e423f724a ARM: use vqdmull and vqadds/vqsubs to implement vqdmlal/vqdmlsl
The NEON intrinsics vqdmlal and vqdmlsl are really just combinations of a
saturating-doubling-multiply (vqdmull) and a saturating add/sub, so now that
LLVM can spot those patterns Clang should emit them instead of specialised
intrinsics.

Feature already tested by existing ARM NEON intrinsics tests.

llvm-svn: 189462
2013-08-28 09:46:34 +00:00
Juergen Ributzka 53e2f275d2 Fix last commit.
llvm-svn: 188724
2013-08-19 23:08:53 +00:00
Juergen Ributzka c6ab1f8bfd Simplify code by using CreateMemTemp. No functional change intended.
Reviewer: Eli
llvm-svn: 188722
2013-08-19 22:20:37 +00:00
Juergen Ributzka 2c2dbf4542 Fix the name and the type of the argument for the intrinsic
_mm256_broadcastsi128_si256 to align with the Intel documentation.

This fixes PR16581 and rdar://14747994.

llvm-svn: 188609
2013-08-17 16:40:09 +00:00
Hao Liu 0e9837a385 Fix the build failure of the Release version
llvm-svn: 188456
2013-08-15 11:38:54 +00:00
Hao Liu 4efa1402fe Clang and AArch64 backend patches to support shll/shl and vmovl instructions and ACLE functions
llvm-svn: 188452
2013-08-15 08:26:30 +00:00
Tim Northover 2fe823a6c3 AArch64: initial NEON support
Patch by Ana Pazos

- Completed implementation of instruction formats:
AdvSIMD three same
AdvSIMD modified immediate
AdvSIMD scalar pairwise

- Completed implementation of instruction classes
(some of the instructions in these classes
belong to yet unfinished instruction formats):
Vector Arithmetic
Vector Immediate
Vector Pairwise Arithmetic

- Initial implementation of instruction formats:
AdvSIMD scalar two-reg misc
AdvSIMD scalar three same

- Initial implementation of instruction class:
Scalar Arithmetic

- Initial clang changes to support arm v8 intrinsics.
Note: no clang changes for scalar intrinsics function name mangling yet.

- Comprehensive test cases for added instructions
To verify auto codegen, encoding, decoding, diagnosis, intrinsics.

llvm-svn: 187568
2013-08-01 09:23:19 +00:00
Bill Schmidt 778d387684 [PowerPC] Support powerpc64le as a syntax-checking target.
This patch provides basic support for powerpc64le as an LLVM target.
However, use of this target will not actually generate little-endian
code.  Instead, use of the target will cause the correct little-endian
built-in defines to be generated, so that code that tests for
__LITTLE_ENDIAN__, for example, will be correctly parsed for
syntax-only testing.  Code generation will otherwise be the same as
powerpc64 (big-endian), for now.

The patch leaves open the possibility of creating a little-endian
PowerPC64 back end, but there is no immediate intent to create such a
thing.

The new test case variant ensures that correct built-in defines for
little-endian code are generated.

llvm-svn: 187180
2013-07-26 01:36:11 +00:00
Eli Bendersky c3496b0643 Partial revert of r185568.
r186899 and r187061 added a preferred way for some architectures not to get
intrinsic generation for math builtins. So the code changes in r185568 can
now be undone (the test remains).

llvm-svn: 187079
2013-07-24 21:22:01 +00:00
Tim Northover 6aacd49094 ARM: implement low-level intrinsics for the atomic exclusive operations.
This adds three overloaded intrinsics to Clang:
    T __builtin_arm_ldrex(const volatile T *addr)
    int __builtin_arm_strex(T val, volatile T *addr)
    void __builtin_arm_clrex()

The intent is that these do what users would expect when given most sensible
types. Currently, "sensible" translates to ints, floats and pointers.
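A hedged sketch built from the new intrinsics, an exclusive add loop:

  int exclusive_add(volatile int *p, int v) {
    int old;
    do {
      old = __builtin_arm_ldrex(p);
    } while (__builtin_arm_strex(old + v, p));   /* retry while the store fails */
    return old;
  }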

llvm-svn: 186394
2013-07-16 09:47:53 +00:00
Richard Smith 6cbd65d84d Add a __builtin_addressof that performs the same functionality as the built-in
& operator (ignoring any overloaded operator& for the type). The purpose of
this builtin is for use in std::addressof, to allow it to be made constexpr;
the existing implementation technique (reinterpret_cast to some reference type,
take address, reinterpret_cast back) does not permit this because
reinterpret_cast between reference types is not permitted in a constant
expression in C++11 onwards.

llvm-svn: 186053
2013-07-11 02:27:57 +00:00
Eli Bendersky 9b64ec18c1 Add target hook CodeGen queries when generating builtin pow*.
Without fmath-errno, Clang currently generates calls to @llvm.pow.* intrinsics
when it sees pow*(). This may not be suitable for all targets (for
example le32/PNaCl), so the attached patch adds a target hook that CodeGen
queries. The target can state its preference for having or not having the
intrinsic generated. Non-PNaCl behavior remains unchanged;
PNaCl-specific test added.

llvm-svn: 185568
2013-07-03 19:19:12 +00:00
Eli Bendersky 099888eccd Remove misplaced comment
llvm-svn: 184862
2013-06-25 17:07:56 +00:00
Michael Gottesman 930ecdb77b [checked-arithmetic builtins] Added builtins to enable users to perform checked arithmetic in C.
This will enable users in security-critical applications to perform
checked arithmetic in a fast, safe manner that is amenable to C.

Tests and an update to the Language Extensions document are included as well.
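
A usage sketch, assuming one of the type-specific forms, __builtin_sadd_overflow(int, int, int *), described in the language extensions document:

  #include <limits.h>
  #include <stdbool.h>
  #include <stdio.h>

  int main(void) {
    int sum;
    // Returns true and stores the wrapped result when the signed add overflows.
    bool overflowed = __builtin_sadd_overflow(INT_MAX, 1, &sum);
    if (overflowed)
      puts("overflow detected");
    return 0;
  }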

rdar://13421498.

llvm-svn: 184497
2013-06-20 23:28:10 +00:00
Michael Gottesman 1534399059 [multiprecision-builtins] Added missing builtin __builtin_{add,sub}cb for {add,sub} with carry for bytes.
I have had several people ask me why these builtins were not available in
clang (since they seem like a logical completion of the set). This patch
implements said builtins.

Relevant tests are included as well. I also updated the Clang language extension reference.
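
A sketch of chaining the new byte variant, assuming the same __builtin_addcb(x, y, carry_in, &carry_out) shape as the wider variants:

  // Add two little-endian 2-byte numbers, propagating the carry.
  void add16(const unsigned char a[2], const unsigned char b[2],
             unsigned char out[2]) {
    unsigned char carry;
    out[0] = __builtin_addcb(a[0], b[0], 0, &carry);
    out[1] = __builtin_addcb(a[1], b[1], carry, &carry);
  }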

rdar://14192664.

llvm-svn: 184227
2013-06-18 20:40:40 +00:00
Rafael Espindola 2219fc5821 Fix __clear_cache on ARM.
Current versions of gcc produce an error if __clear_cache is declared as anything but

__clear_cache(char *a, char *b);

It looks like we had just implemented a gcc bug that is now fixed.
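
A usage sketch with the accepted prototype (buf and len are hypothetical; the call makes freshly written code visible to the instruction stream):

  extern void __clear_cache(char *begin, char *end);

  // Flush the range [buf, buf + len) after emitting code into it,
  // before jumping to it.
  void flush_generated_code(char *buf, unsigned long len) {
    __clear_cache(buf, buf + len);
  }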

llvm-svn: 181784
2013-05-14 12:45:47 +00:00
Benjamin Kramer 4757d0aadf Revert accidental commit.
llvm-svn: 181782
2013-05-14 12:23:08 +00:00
Benjamin Kramer 324bf7a159 Take a stab at trying to unbreak the makefile build.
There is no clangRewrite.a.

llvm-svn: 181781
2013-05-14 12:21:21 +00:00
Tim Northover 8ec8c4bf89 AArch64: teach Clang about __clear_cache intrinsic
libgcc provides a __clear_cache intrinsic on AArch64, much like it
does on 32-bit ARM.

llvm-svn: 181111
2013-05-04 07:15:13 +00:00
John McCall c8e0170578 Standardize accesses to the TargetInfo in IR-gen.
Patch by Stephen Lin!

llvm-svn: 179638
2013-04-16 22:48:15 +00:00
Michael Liao ffaae3511a Add RDSEED intrinsic support defined in AVX2 extension
llvm-svn: 178331
2013-03-29 05:17:55 +00:00
John McCall 47fb950871 Change hasAggregateLLVMType, which conflates complex and
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.

Also, normalize the API for loading and storing complexes.

I'm working on a larger patch and wanted to pull these
changes out, but it would have been annoying to disentangle
them from each other.

llvm-svn: 176656
2013-03-07 21:37:08 +00:00
John McCall 882987f30c Use the actual ABI-determined C calling convention for runtime
calls and declarations.

LLVM has a default CC determined by the target triple.  This is
not always the actual default CC for the ABI we've been asked to
target, and so we sometimes find ourselves annotating all user
functions with an explicit calling convention.  Since these
calling conventions usually agree for the simple set of argument
types passed to most runtime functions, using the LLVM-default CC
in principle has no effect.  However, the LLVM optimizer goes
into histrionics if it sees this kind of formal CC mismatch,
since it has no concept of CC compatibility.  Therefore, if this
module happens to define the "runtime" function, or got LTO'ed
with such a definition, we can miscompile;  so it's quite
important to get this right.

Defining runtime functions locally is quite common in embedded
applications.

llvm-svn: 176286
2013-02-28 19:01:20 +00:00
Will Dietz f54319c891 [ubsan] Add support for -fsanitize-blacklist
llvm-svn: 172808
2013-01-18 11:30:38 +00:00
Tim Northover 4ef746768b Correct order of operands forwarding NEON vfma to LLVM fma
llvm-svn: 172650
2013-01-16 20:13:15 +00:00
Michael Gottesman a2b5c4ba6a Multiprecision subtraction builtins.
We lower these into 2x chained usub.with.overflow intrinsics.

llvm-svn: 172476
2013-01-14 21:44:30 +00:00
NAKAMURA Takumi 7ab4fbf5c2 CGBuiltin.cpp: Fix abuse of ArrayRef in EmitOverflowIntrinsic().
In ArrayRef<T>(X), X should not be a temporary value. It could be rewritten more verbosely as:

  llvm::Type *XTy = X->getType();
  ArrayRef<llvm::Type *> Ty(XTy);
  llvm::Value *Callee = CGF.CGM.getIntrinsic(IntrinsicID, Ty);

Since it is safe when both XTy and Ty are temporaries within one statement, it could be shortened to:

  llvm::Value *Callee = CGF.CGM.getIntrinsic(IntrinsicID, ArrayRef<llvm::Type*>(X->getType()));

ArrayRef<T> has an implicit constructor that creates a single-element ArrayRef from a T:

  llvm::Value *Callee = CGF.CGM.getIntrinsic(IntrinsicID, X->getType());

MSVC-generated clang.exe crashed.

llvm-svn: 172352
2013-01-13 11:26:44 +00:00
Michael Gottesman 54398015bf Added builtins for multiprecision adds.
We lower all of these intrinsics into a 2x chained usage of
uadd.with.overflow.

llvm-svn: 172341
2013-01-13 02:22:39 +00:00
Dmitri Gribenko f857950d39 Remove useless 'llvm::' qualifier from names like StringRef and others that are
brought into 'clang' namespace by clang/Basic/LLVM.h

llvm-svn: 172323
2013-01-12 19:30:44 +00:00
Chandler Carruth ffd5551bc7 Rewrite #includes for llvm/Foo.h to llvm/IR/Foo.h as appropriate to
reflect the migration in r171366.

Re-sort the #include lines to reflect the new paths.

llvm-svn: 171369
2013-01-02 11:45:17 +00:00
Meador Inge b97878a235 CodeGen: Expand creal and cimag into complex field loads
PR 14529 was opened because neither Clang nor LLVM was expanding
calls to creal* or cimag* into instructions that just load the
respective complex field.  After some discussion, it was not
considered realistic to do this in LLVM because of the platform
specific way complex types are expanded.  Thus a way to solve
this in Clang was pursued.  GCC does a similar expansion.

This patch adds the feature to Clang by making the creal* and
cimag* functions library builtins and modifying the builtin code
generator to look for the new builtin types.
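
With this change, calls like the ones below compile to plain loads of the two fields of z instead of libcalls (sketch):

  #include <complex.h>

  double real_part(double _Complex z) { return creal(z); }
  double imag_part(double _Complex z) { return cimag(z); }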

llvm-svn: 170455
2012-12-18 20:58:04 +00:00
Chandler Carruth 3a02247dc9 Sort all of Clang's files under 'lib', and fix up the broken headers
uncovered.

This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.

I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.

llvm-svn: 169237
2012-12-04 09:13:33 +00:00
Will Dietz 88e0233ff4 [ubsan] Add flag to enable recovery from checks when possible.
llvm-svn: 169114
2012-12-02 19:50:33 +00:00
Richard Smith b1b0ab41e7 Use the individual -fsanitize=<...> arguments to control which of the UBSan
checks to enable. Remove frontend support for -fcatch-undefined-behavior,
-faddress-sanitizer and -fthread-sanitizer now that they don't do anything.

llvm-svn: 167413
2012-11-05 22:21:05 +00:00
Micah Villmow ea2fea2a60 Clean up some clang code to use the new type functions instead of using cast<>.
llvm-svn: 166684
2012-10-25 15:39:14 +00:00
Nico Weber 636fc09960 "Implement" codegen support for __noop().
Eli discovered that __noop's sema behavior also needs some love. I filed
PR14081 for that and intend to improve it.

llvm-svn: 165886
2012-10-13 22:30:41 +00:00
Richard Smith e30752c93b -fcatch-undefined-behavior: emit calls to the runtime library whenever one of the checks fails.
llvm-svn: 165536
2012-10-09 19:52:38 +00:00
Micah Villmow dd31ca10ef Move TargetData to DataLayout.
llvm-svn: 165395
2012-10-08 16:25:52 +00:00
Benjamin Kramer a801f4a81d Expose __builtin_bswap16.
GCC has always supported this on PowerPC and 4.8 supports it on all platforms,
so it's a good idea to expose it in clang too. LLVM supports this on all targets.
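
Usage sketch:

  #include <stdint.h>

  // Swap the two bytes of a 16-bit value: 0x1234 becomes 0x3412.
  uint16_t swap16(uint16_t x) {
    return __builtin_bswap16(x);
  }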

llvm-svn: 165362
2012-10-06 14:42:22 +00:00
Bob Wilson 39d8a132df Add an FMA intrinsic for ARM Neon.
llvm-svn: 164904
2012-09-29 23:52:48 +00:00
Sylvestre Ledru 33b5baf189 Revert 'Fix a typo 'iff' => 'if''. iff is an abbreviation of if and only if. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164766
llvm-svn: 164769
2012-09-27 10:16:10 +00:00
Sylvestre Ledru a876013dc9 Fix a typo 'iff' => 'if'
llvm-svn: 164766
2012-09-27 09:57:10 +00:00
Nico Weber ca496f34a2 Add codegen support for the __debugbreak intrinsic.
llvm-svn: 164660
2012-09-26 05:40:16 +00:00
Jim Grosbach 11b6fe5e9c ARM: Use a dedicated intrinsic for vector bitwise select.
The expression based expansion too often results in IR level optimizations
splitting the intermediate values into separate basic blocks, preventing
the formation of the VBSL instruction as the code author intended. In
particular, LICM would often hoist part of the computation out of a loop.

rdar://11011471

llvm-svn: 164342
2012-09-21 00:18:30 +00:00
Jim Grosbach d3608f433a Tidy up. Trailing whitespace and 80 columns.
llvm-svn: 164341
2012-09-21 00:18:27 +00:00
Richard Smith 4d1458ed38 -fcatch-undefined-behavior: Factor emission of the creation of, and branch to,
the trap BB out of the individual checks and into a common function, to prepare
for making this code call into a runtime library. Rename the existing EmitCheck
to EmitTypeCheck to clarify it and to move it out of the way of the new
EmitCheck.

llvm-svn: 163451
2012-09-08 02:08:36 +00:00
Eli Friedman 504f9a2872 Make alignment computation for pointer values for builtins handle
non-pointer types with a pointer representation correctly. PR13660.

llvm-svn: 162862
2012-08-29 21:21:11 +00:00
Eli Friedman 5d14c48dbb Attempt to fix clang bootstrap (broken by r162425).
llvm-svn: 162440
2012-08-23 11:27:56 +00:00
Eli Friedman a5dd5684dc Use the alignment from lvalue emission to more accurately compute the alignment
of a pointer for builtin emission, instead of just depending on the type of the
pointee.  <rdar://problem/11314941>.

llvm-svn: 162425
2012-08-23 03:10:17 +00:00
Fariborz Jahanian 1ac111989d irgen: inline code for several of the complex builtin
calls. // rdar://8315199

llvm-svn: 161891
2012-08-14 20:09:28 +00:00
Bob Wilson 2605fef7db Avoid using i64 types for vld1q_lane/vst1q_lane intrinsics.
The backend has to legalize i64 types by splitting them into two 32-bit pieces,
which leads to poor quality code.  If we produce code for these intrinsics that
uses one-element vector types, which can live in Neon vector registers without
getting split up, then the generated code is much better.  Radar 11998303.

llvm-svn: 161879
2012-08-14 17:27:04 +00:00
Hal Finkel 3fadbb54fd Add __builtin_readcyclecounter() to produce the @llvm.readcyclecounter() intrinsic.
llvm-svn: 161310
2012-08-05 22:03:08 +00:00
Joel Jones 682150364a More replacing of target-dependent intrinsics with target-independent
intrinsics.  The second instruction(s) to be handled are the vector versions 
of count set bits (ctpop).

The changes here are to clang so that it generates a target independent 
vector ctpop when it sees an ARM dependent vector bits set count.  The changes 
in llvm are to match the target independent vector ctpop and in 
VMCore/AutoUpgrade.cpp to update any existing bc files containing ARM 
dependent vector pop counts with target-independent ctpops.  There are also 
changes to an existing test case in llvm for ARM vector count instructions and 
to a test for the bitcode upgrade.

<rdar://problem/11892519>

There is deliberately no test for the change to clang, as, so far as I know, no
consensus has been reached regarding how to test NEON instructions in clang;
q.v. <rdar://problem/8762292>

llvm-svn: 160409
2012-07-18 00:01:03 +00:00
Simon Atanasyan 94a6d863a9 Revert commit r160308. We decided to move builtin selection to the backend.
llvm-svn: 160353
2012-07-17 08:15:06 +00:00
Simon Atanasyan a06d06b660 MIPS: Implement __builtin_mips_shll_qb builtin function overloading.
This function has two versions. The first one is used for a register operand.
The second one is used for an immediate number.

llvm-svn: 160308
2012-07-16 18:52:02 +00:00
Eric Christopher 934a1c0231 Capitalize comment.
llvm-svn: 160220
2012-07-14 19:29:12 +00:00
Joel Jones 3e00e9d5c1 This is one of the first steps at moving to replace target-dependent
intrinsics with target-independent intrinsics.  The first instruction(s) to be
handled are the vector versions of count leading zeros (ctlz).

The changes here are to clang so that it generates a target independent 
vector ctlz when it sees an ARM dependent vector ctlz.  The changes in llvm 
are to match the target independent vector ctlz and in VMCore/AutoUpgrade.cpp 
to update any existing bc files containing ARM dependent vector ctlzs with 
target-independent ctlzs.  There are also changes to an existing test case in 
llvm for ARM vector count instructions and a new test for the bitcode upgrade.

<rdar://problem/11831778>

There is deliberately no test for the change to clang, as, so far as I know, no
consensus has been reached regarding how to test NEON instructions in clang;
q.v. <rdar://problem/8762292>

llvm-svn: 160201
2012-07-13 23:26:27 +00:00
Benjamin Kramer a43b6999ff Add _rdrand{16,32,64}_step intrinsics to immintrin.h
llvm-svn: 160118
2012-07-12 09:33:03 +00:00
John McCall 8dda7b27ee Distinguish more carefully between free functions and C++ instance methods
in the ABI arrangement, and leave a hook behind so that we can easily
tweak CCs on platforms that use different CCs by default for C++
instance methods.

llvm-svn: 159894
2012-07-07 06:41:13 +00:00
Benjamin Kramer 46a72fb741 Dead code eliminate the massive hexagon builtin intrinsic supporting code.
The tablegen'd code does the same thing without this egregious duplication.
In my limited testing everything seems to work; however, there can be
differences if the clang and llvm builtin definitions don't match.

llvm-svn: 159371
2012-06-28 20:08:55 +00:00
Benjamin Kramer 8652ca8a6a Now that we use the GCC builtin <-> llvm intrinsic mapping, dead code eliminate the handwritten emitter.
The generated code uncovered an invalid prototype for __builtin_mips_shilo; fix it along the way.

llvm-svn: 159368
2012-06-28 19:10:01 +00:00
Simon Atanasyan 07ce7d8fb5 Support MIPS DSP Rev1 intrinsics.
This patch was reviewed in the llvm-commits list by Jim Grosbach.

llvm-svn: 159366
2012-06-28 18:23:16 +00:00
Richard Smith 01ade177e9 If the first argument of __builtin_object_size can be folded to a constant
pointer, but such folding encounters side-effects, ignore the side-effects
rather than performing them at runtime: CodeGen generates wrong code for
__builtin_object_size in that case.

llvm-svn: 157310
2012-05-23 04:13:20 +00:00
Nuno Lopes 2b1ff46ed1 revert the usage of the objectsize intrinsic with 3 parameters (to match LLVM r157255)
llvm-svn: 157256
2012-05-22 15:26:48 +00:00
Sirish Pande 84dce5d0c2 Hexagon V5 intrinsics support in clang.
llvm-svn: 156630
2012-05-11 19:39:08 +00:00
Nuno Lopes ddcce0bb90 update calls to objectsize intrinsic to match LLVM r156473
add a test for -fbounds-checking code generation

llvm-svn: 156474
2012-05-09 15:53:34 +00:00
Craig Topper c83dff0993 Convert AVX non-temporal store builtins to LLVM-native IR. This was previously done for SSE builtins.
llvm-svn: 156296
2012-05-07 06:25:45 +00:00
Chandler Carruth 70ac923ebc Revert r155363, due to the underlying patches in LLVM causing regression
test suite failures.

llvm-svn: 155371
2012-04-23 18:25:40 +00:00
Sirish Pande 7039d0eaee Hexagon V5 (floating point) support in cfe.
llvm-svn: 155363
2012-04-23 17:48:57 +00:00
Chandler Carruth b8ae76037a Revert some Hexagon builtin commits to match reverts done to LLVM in
r155047. See the LLVM log for the primary motivation:
  http://llvm.org/viewvc/llvm-project?rev=155047&view=rev

Primary commit r154828:
  - Several issues were raised in review, and fixed in subsequent
    commits.
  - Follow-up commits were also reverted, and should be folded into the
    original before reposting:
    - r154837: Re-add the 'undef BUILTIN' thing to fix the build.
    - r154928: Fix build warnings, re-add (and correct) header and
      license
    - r154937: Typo fix.

Please resubmit this patch with the relevant LLVM resubmission.

llvm-svn: 155048
2012-04-18 21:32:25 +00:00
Sirish Pande f02eebef2a Hexagon V5(Floating Point) support.
llvm-svn: 154828
2012-04-16 17:04:05 +00:00
Richard Smith 01ba47d7b6 Implement the missing pieces needed to support libstdc++4.7's <atomic>:
__atomic_test_and_set, __atomic_clear, plus a pile of undocumented __GCC_*
predefined macros.

Implement library fallback for __atomic_is_lock_free and
__c11_atomic_is_lock_free, and implement __atomic_always_lock_free.

Contrary to their documentation, GCC's __atomic_fetch_add family don't
multiply the operand by sizeof(T) when operating on a pointer type.
libstdc++ relies on this quirk. Remove this handling for all but the
__c11_atomic_fetch_add and __c11_atomic_fetch_sub builtins.

Contrary to their documentation, __atomic_test_and_set and __atomic_clear
take a first argument of type 'volatile void *', not 'void *' or 'bool *',
and __atomic_is_lock_free and __atomic_always_lock_free have an argument
of type 'const volatile void *', not 'void *'.
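
A minimal spinlock sketch using those signatures (the __ATOMIC_* memory-order macros are the predefined ones):

  // The lock word is passed as 'volatile void *', matching the
  // signatures described above.
  static volatile _Bool lock_word;

  void lock(void) {
    while (__atomic_test_and_set(&lock_word, __ATOMIC_ACQUIRE))
      ; // spin until the previous value was clear
  }

  void unlock(void) {
    __atomic_clear(&lock_word, __ATOMIC_RELEASE);
  }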

With this change, libstdc++4.7's <atomic> passes libc++'s atomic test suite,
except for a couple of libstdc++ bugs and some cases where libc++'s test
suite tests for properties which implementations have latitude to vary.

llvm-svn: 154640
2012-04-13 00:45:38 +00:00
Richard Smith b1e36c662b Provide, and document, a set of __c11_atomic_* intrinsics to implement C11's
<stdatomic.h> header.
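
A sketch of the kind of wrapper <stdatomic.h> can now be built from (the __c11_* forms operate directly on _Atomic objects):

  // Roughly what a C11 atomic_fetch_add reduces to with these intrinsics.
  int fetch_add_one(_Atomic(int) *counter) {
    return __c11_atomic_fetch_add(counter, 1, __ATOMIC_SEQ_CST);
  }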

In passing, fix LanguageExtensions to note that C11 and C++11 are no longer
"upcoming standards" but are now actually standardized.

llvm-svn: 154513
2012-04-11 17:55:32 +00:00
Eli Friedman fefe0d07ea Don't try to create "store atomic" instructions of non-integer types; they aren't supported at the moment. PR12040.
llvm-svn: 152891
2012-03-16 01:48:04 +00:00
James Molloy 4813fc8ed6 Fix codegen for vld{3,4}_dup intrinsics.
Patch by Silviu Baranga!

llvm-svn: 152788
2012-03-15 09:12:01 +00:00
Chris Lattner aaa18fad7d add a testcase for PR12094 and fix a crash on pointer to incomplete type,
reported by Richard Smith.

llvm-svn: 151993
2012-03-04 00:52:12 +00:00
Jay Foad b0f3344b10 PR12094: Set the alignment of memory intrinsic instructions based on the
types of the pointer arguments.

llvm-svn: 151927
2012-03-02 18:34:30 +00:00
Bill Wendling f1a3fcac0d Use an ArrayRef when we can instead of passing in a SmallVectorImpl reference.
llvm-svn: 151150
2012-02-22 09:30:11 +00:00
Chandler Carruth a2a5410e6d Add 3dNOW intrinsic header to x86intrin.h, conditioned on __3dNOW__ to
match the behavior of GCC. Also add a test for these intrinsics, which
apparently have *zero* tests. =[ Not surprisingly, Clang crashed when
compiling these.

Fix the bug in CodeGen where we failed to bitcast the argument type to
x86mmx prior to calling the LLVM intrinsic. This fixes an assert on the
new 3dnow-builtins.c test.

This is one issue impacting the efforts to get Clang to emulate the
Microsoft intrinsics headers -- 3dnow intrinsics are implicitly made
available there.

llvm-svn: 150948
2012-02-20 07:35:45 +00:00
Chris Lattner ece0409a1a simplify a bunch of code to use the well-known LLVM IR types computed by CodeGenModule.
llvm-svn: 149943
2012-02-07 00:39:47 +00:00
Bob Wilson 49708d41a6 Preserve alignment for Neon vld1_lane/dup and vst1_lane intrinsics.
We had been generating load/store instructions with the default alignment
for the vector element type, even when the pointer argument had less alignment.
<rdar://problem/10538555>

llvm-svn: 149794
2012-02-04 23:58:08 +00:00
Craig Topper 2962e7b656 Remove long dead code for handling vector shift by immediate builtins.
llvm-svn: 149237
2012-01-30 08:51:36 +00:00
Craig Topper 2c82f9ecf8 Remove custom handling for cmpsd/cmpss/cmppd/cmpps builtins. The builtins are now in IntrinsicsX86.td.
llvm-svn: 149235
2012-01-30 08:38:42 +00:00
Craig Topper d6d3a05b4f Cleanup 3dnow builtin handling. Most of them were already handled by LLVM connecting intrinsics and builtins in IntrinsicsX86.td.
llvm-svn: 149233
2012-01-30 08:18:19 +00:00
Benjamin Kramer 1412816686 Make the __builtin_c[lt]zs builtins target independent.
There is really no reason to have these only available on x86. It's
just __builtin_c[tl]z for shorts.
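
Usage sketch (as with the other __builtin_c[tl]z forms, the argument must be non-zero):

  // 16-bit leading/trailing zero counts, now available on every target.
  int leading_zeros(unsigned short x)  { return __builtin_clzs(x); }  // 0x0001 -> 15
  int trailing_zeros(unsigned short x) { return __builtin_ctzs(x); }  // 0x8000 -> 15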

Modernize the test while at it.

llvm-svn: 149183
2012-01-28 18:42:57 +00:00
Bob Wilson a7a61e2701 Make clz/ctz builtins defined for zero on ARM targets. rdar://10732455
ARM supports clz and ctz directly and both operations have well-defined
results for zero.  There is no disadvantage in performance to using the
defined-at-zero versions of llvm.ctlz/cttz intrinsics.  We're running into
ARM-specific code written with the assumption that __builtin_clz(0) == 32,
even though that value is technically undefined.  The code is failing now
because of llvm optimizations that are taking advantage of the undef
behavior (specifically svn r147255).  There's nothing wrong with that
optimization on x86 where any incorrect assumptions about __builtin_clz(0)
will quickly be exposed.  For ARM, though, optimizations based on that undef
behavior are likely to cause subtle bugs.  Other targets with defined-at-zero
clz/ctz support may want to override the default behavior as well.
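
The kind of code this protects, sketched (relying on __builtin_clz(0) is still formally undefined in general):

  // Number of significant bits in x. Code like this silently assumes
  // __builtin_clz(0) == 32 so that bit_width(0) == 0 -- an assumption
  // that only holds where clz is defined at zero, as it now is on ARM.
  unsigned bit_width(unsigned x) {
    return 32 - __builtin_clz(x);
  }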

llvm-svn: 149086
2012-01-26 22:14:27 +00:00
Chris Lattner 2d6b7b91b9 reapply r148902:
"use the new ConstantVector::getSplat method where it makes sense."

Also simplify a bunch of code to use the Builder->getInt32 instead
of doing it the hard and ugly way.  Much more progress could be made
here, but I don't plan to do it.

llvm-svn: 148926
2012-01-25 05:34:41 +00:00
Argyrios Kyrtzidis 5a25297c5e Revert 148902 which was part of 148901 which was reverted in r148906.
Original log:
 use the new ConstantVector::getSplat method where it makes sense.

llvm-svn: 148907
2012-01-25 02:58:12 +00:00
Chris Lattner c558d7d176 use the new ConstantVector::getSplat method where it makes sense.
llvm-svn: 148902
2012-01-25 02:06:10 +00:00
David Blaikie e4d798f078 More dead code removal (using -Wunreachable-code)
llvm-svn: 148577
2012-01-20 21:50:17 +00:00
Eli Friedman 65499b45f0 Add __builtin_labs and __builtin_llabs, to complete the set of __builtin_*abs. Patch by Ruben Van Boxem.
llvm-svn: 148340
2012-01-17 22:11:30 +00:00
David Blaikie f47fa304a4 Remove unnecessary default cases in switches over enums.
This allows -Wswitch-enum to find switches that need updating when these enums are modified.

llvm-svn: 148281
2012-01-17 02:30:50 +00:00
Eli Friedman df88c54f8d Revert r147655; it's breaking the compiler_rt build on OSX.
llvm-svn: 147677
2012-01-06 20:03:09 +00:00
David Chisnall 9217435a33 If we are compiling with -fno-builtin then don't do constant folding of
builtins.

This fixes PR11711.

llvm-svn: 147655
2012-01-06 12:20:19 +00:00
Richard Smith 5fab0c9e1a Small refactoring and simplification of constant evaluation and some of its
clients. No functionality change.

llvm-svn: 147318
2011-12-28 19:48:30 +00:00
Craig Topper f2855ade2b Add intrinsics for lzcnt and tzcnt instructions.
llvm-svn: 147263
2011-12-25 06:25:37 +00:00
Craig Topper 94aba2c260 More AVX2 intrinsic support including saturating add/sub and palignr.
llvm-svn: 146857
2011-12-19 07:03:25 +00:00
Tony Linthicum 76329bf83f Hexagon backend support
llvm-svn: 146413
2011-12-12 21:14:55 +00:00
Chandler Carruth a31b95cacf Update Clang to emit the new form of llvm.cttz and llvm.ctlz intrinsics,
setting the is_zero_undef flag appropriately to true as that matches the
semantics of these GCC builtins.

This is the Clang side of r146357 in LLVM.

llvm-svn: 146358
2011-12-12 04:28:35 +00:00
NAKAMURA Takumi dabda6b839 lib/CodeGen/CGBuiltin.cpp: Tweak the identifier "Type" to appease msvc.
llvm-svn: 144065
2011-11-08 03:27:04 +00:00
Bob Wilson 98bc98caa8 Clean up type flags for overloaded Neon builtins. No functional change.
This patch just adds a simple NeonTypeFlags class to replace the various
hardcoded constants that had been used until now.  Unfortunately I couldn't
figure out a good way to avoid duplicating that class between clang and
TableGen, but since it's small and rarely changes, that's not so bad.

llvm-svn: 144054
2011-11-08 01:16:11 +00:00
Richard Smith 7b553f1b19 Rename Expr::Evaluate to Expr::EvaluateAsRValue to make it clear that it will
implicitly perform an lvalue-to-rvalue conversion if used on an lvalue
expression. Also improve the documentation of Expr::Evaluate* to indicate which
of them will accept expressions with side-effects.

llvm-svn: 143263
2011-10-29 00:50:52 +00:00
Eli Friedman df14b3a837 Initial implementation of __atomic_* (everything except __atomic_is_lock_free).
llvm-svn: 141632
2011-10-11 02:20:01 +00:00
Richard Smith caf3390d44 Constant expression evaluation refactoring:
- Remodel Expr::EvaluateAsInt to behave like the other EvaluateAs* functions,
   and add Expr::EvaluateKnownConstInt to capture the current fold-or-assert
   behaviour.
 - Factor out evaluation of bitfield bit widths.
 - Fix a few places which would evaluate an expression twice: once to determine
   whether it is a constant expression, then again to get the value.

llvm-svn: 141561
2011-10-10 18:28:20 +00:00
Eli Friedman c8b57f6683 llvm.memory.barrier is going away; remove the wrapper intrinsic __builtin_llvm_memory_barrier.
__atomic_thread_fence will be landing soon as a replacement, wrapping around the new fence instruction.

llvm-svn: 141332
2011-10-06 23:12:03 +00:00
Benjamin Kramer 76399eb2ad de-tmpify clang.
llvm-svn: 140637
2011-09-27 21:06:10 +00:00
David Blaikie 83d382b1ca Switch assert(0/false) to llvm_unreachable.
llvm-svn: 140367
2011-09-23 05:06:16 +00:00
Eli Friedman 2dadd3ebee Fix comment.
llvm-svn: 139678
2011-09-14 00:52:45 +00:00
John McCall 30e4efd458 Correctly generate IR for casted "builtin" functions, where
the builtin is really just a predefined declaration.  These are
totally valid to cast.

llvm-svn: 139657
2011-09-13 23:05:03 +00:00
Eli Friedman 84d2812111 Re-commit r139643.
Make clang use Acquire loads and Release stores where necessary.

llvm-svn: 139650
2011-09-13 22:21:56 +00:00
Eli Friedman acca089617 Revert r139643 while I look into it; it's breaking selfhost.
llvm-svn: 139648
2011-09-13 22:08:16 +00:00
Eli Friedman f92b2e0714 Make clang use Acquire loads and Release stores where necessary.
llvm-svn: 139643
2011-09-13 21:31:32 +00:00
Julien Lerouge e0d5fad37b Remove trailing } in comment.
llvm-svn: 139424
2011-09-09 22:46:39 +00:00
Julien Lerouge 5a6b6987dc Bring llvm.annotation* intrinsics support back to where it was in llvm-gcc: one can
annotate global and local variables, struct fields, or arbitrary statements (using
__builtin_annotation). rdar://8037476.
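
A sketch of the forms this re-enables, assuming the annotate attribute spelling for declarations and an integer first argument for __builtin_annotation:

  // The attribute form annotates declarations; the builtin form annotates
  // a value in an arbitrary statement and returns it unchanged.
  int counter __attribute__((annotate("my_tool.counter")));

  int next(void) {
    int local __attribute__((annotate("my_tool.local"))) = counter;
    return __builtin_annotation(local + 1, "my_tool.incremented");
  }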

llvm-svn: 139423
2011-09-09 22:41:49 +00:00
Eli Friedman e9f8113ec4 Switch clang over to using fence/atomicrmw/cmpxchg instead of the intrinsics (which will go away). LLVM CodeGen does almost exactly the same thing with these and the old intrinsics, so I'm reasonably confident this will not break anything.
There are still a few issues which need to be resolved with code generation for atomic load and store, so I'm not converting the places which need those for now.

I'm not entirely sure what to do about __builtin_llvm_memory_barrier: the fence instruction doesn't expose all the possibilities which can be expressed by __builtin_llvm_memory_barrier.  I would appreciate hearing from anyone who is using this intrinsic.

llvm-svn: 139216
2011-09-07 01:41:24 +00:00
Ted Kremenek c14efa7122 Fix a handful of dead stores found by Clang's static analyzer. There's a bunch of others I haven't touched.
llvm-svn: 137867
2011-08-17 21:04:19 +00:00
Bob Wilson 445c24f8f0 Move handling of vget_lane/vset_lane before the code that checks the type.
Unlike most of the other Neon intrinsics, these are not overloaded and do not
have the extra argument that specifies the vector type.  This has not been
fatal because the lane number operand is supposed to be an ICE and so that
value has harmlessly been used as the type identifier.  Radar 9901281.

llvm-svn: 137550
2011-08-13 05:03:46 +00:00
Jay Foad 5709f7c5f7 Remove some unnecessary single element array temporaries.
llvm-svn: 136461
2011-07-29 13:56:53 +00:00
Frits van Bommel ede0dc6dda Shorten some expressions by using ArrayRef::slice().
llvm-svn: 135910
2011-07-25 15:13:01 +00:00
Chris Lattner 0e62c1cc0b remove unneeded llvm:: namespace qualifiers on some core types now that LLVM.h imports
them into the clang namespace.

llvm-svn: 135852
2011-07-23 10:55:15 +00:00
Frits van Bommel 717d7edd3e Migrate LLVM and Clang to use the new makeArrayRef(...) functions where previously explicit non-default constructors were used.
Mostly mechanical with some manual reformatting.

llvm-svn: 135390
2011-07-18 12:00:32 +00:00
Chris Lattner 2192fe50da de-constify llvm::Type, patch by David Blaikie!
llvm-svn: 135370
2011-07-18 04:24:23 +00:00
Jay Foad 5bd375a6cc Convert CallInst and InvokeInst APIs to use ArrayRef.
llvm-svn: 135265
2011-07-15 08:37:34 +00:00
Benjamin Kramer 8d375cef55 Change intrinsic getter to take an ArrayRef, now that the underlying function in LLVM does.
llvm-svn: 135155
2011-07-14 17:45:50 +00:00
Chris Lattner a5f58b05e8 clang side to match the LLVM IR type system rewrite patch.
llvm-svn: 134831
2011-07-09 17:41:47 +00:00
Jakub Staszak d2cf2cbae9 Introduce __builtin_expect() intrinsic support.
llvm-svn: 134761
2011-07-08 22:45:14 +00:00
Cameron Zwarich ae7bc98710 Add codegen support for the fma/fmal/fmaf builtins.
llvm-svn: 134743
2011-07-08 21:39:34 +00:00
Bob Wilson 6bc2164d2a Revert "Shorten some ARM builtin names by removing unnecessary "neon" prefix."
Sorry, this was a bad idea.  Within clang these builtins are in a separate
"ARM" namespace, but the actual builtin names should clearly distinguish tha
they are target specific.

llvm-svn: 133833
2011-06-24 22:13:26 +00:00
Bob Wilson 932e5b5d52 Shorten some ARM builtin names by removing unnecessary "neon" prefix.
llvm-svn: 133826
2011-06-24 21:32:46 +00:00
Chris Lattner 845511fe1c update for api change.
llvm-svn: 133365
2011-06-18 22:49:11 +00:00
Bruno Cardoso Lopes 3b0297a98c Update the prefetch intrinsic usage. Now the last argument tells codegen
whether it's a data or instruction cache access.

llvm-svn: 132977
2011-06-14 05:00:30 +00:00
Benjamin Kramer df1fb13a5c Eliminate temporary argument vectors.
llvm-svn: 132260
2011-05-28 14:26:31 +00:00
Bruno Cardoso Lopes fe73374d7a Add support for ARM ldrexd/strexd builtins
llvm-svn: 132249
2011-05-28 04:11:33 +00:00
Bill Wendling bb455154a1 Remove the 'unaligned load' builtins now that they're no longer used in the *mmintrin.h files.
llvm-svn: 131300
2011-05-13 18:52:28 +00:00
Bill Wendling e106c34817 LLVM doesn't always optimize away the four loads from this:
(__m128){ p[0], p[1], p[2], p[3] }

which produces really bad code. This could be done in instcombine, but it's
probably better to do it in the front-end instead.
<rdar://problem/9424836>

llvm-svn: 131237
2011-05-12 19:02:15 +00:00
Bill Wendling 6869b6abf8 Simplification noticed by Chris.
llvm-svn: 130864
2011-05-04 20:28:12 +00:00
Bill Wendling 5f9150b5b1 Convert the non-temporal store builtins to LLVM-native IR.
llvm-svn: 130830
2011-05-04 02:40:38 +00:00
Fariborz Jahanian 24ac1599fc Generalize the case for built-in expressions having
side-effects so that their IR is generated, not just for
__builtin_expect. // rdar://9330105
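
The case being generalized, sketched (poll_device is a hypothetical function with a side effect):

  extern int poll_device(void);

  void drain(void) {
    // Even when the expectation is constant-folded away, the call must
    // still be emitted for its side effect.
    while (__builtin_expect(poll_device(), 0))
      ;
  }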

llvm-svn: 130172
2011-04-25 23:10:07 +00:00
Fariborz Jahanian 5a866c0bf2 Ir-gen the side-effect(s) when __builtin_expect is
constant-folded. // rdar://9330105

llvm-svn: 130163
2011-04-25 22:30:02 +00:00
Chris Lattner 54fd1a1ad3 fix a crash on code that uses the result value of __builtin___memcpy_chk.
llvm-svn: 129892
2011-04-20 23:14:50 +00:00
Chris Lattner 30107ed600 fold memcpy/set/move_chk to llvm.memcpy/set/move when the sizes
are trivial.  This exposes opportunities earlier, and allows fastisel
to do good things with these at -O0.

This addresses rdar://9289468 - clang doesn't fold memset_chk at -O0
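
The shape of the case this addresses, sketched (a trivially-satisfied object-size check with a constant length):

  struct packet { char hdr[16]; char body[48]; };

  void clear(struct packet *p) {
    // Now folds straight to llvm.memset instead of a __memset_chk call,
    // even at -O0, because the size check is trivially satisfied.
    __builtin___memset_chk(p, 0, sizeof *p, __builtin_object_size(p, 0));
  }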

llvm-svn: 129651
2011-04-17 00:40:24 +00:00
Michael J. Spencer 6826eb816a Add 3DNow! Intrinsics.
llvm-svn: 129570
2011-04-15 15:07:13 +00:00
Bill Wendling a865185ad6 Removing the unaligned load tests from builtins-x86.c since they're generated by a regular 'load' now.
llvm-svn: 129464
2011-04-13 20:17:22 +00:00
Bill Wendling 88ae43772a It looks like the FreeBSD buildbot needs this for the builtins-x86.c test.
llvm-svn: 129433
2011-04-13 10:02:54 +00:00
Bill Wendling b9c9e34cb3 Just use a native "load" instead of translating the builtin later. Clang can
take it!

I wasn't able to get __builtin_ia32_loaddqu to transform into an unaligned
load...I'll have to look into it further.

llvm-svn: 129427
2011-04-13 05:58:17 +00:00
Bill Wendling 3137d3cb49 Convert the unaligned load builtins to the first-class versions.
llvm-svn: 129420
2011-04-13 00:36:37 +00:00
Chris Lattner 9cb59fa834 add a __sync_swap builtin to fill out the rest of the __sync builtins.
Patch by Dave Zarzycki!
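
A sketch, assuming it follows the other __sync builtins in taking a pointer and a new value and returning the previous contents:

  // Atomically exchange *flag for 1 and return the old value.
  int test_and_set_flag(int *flag) {
    return __sync_swap(flag, 1);
  }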

llvm-svn: 129189
2011-04-09 03:57:26 +00:00
Matt Beaumont-Gay 873c6dd875 Oops, prefer C-style cast here
llvm-svn: 128607
2011-03-31 01:56:27 +00:00
Matt Beaumont-Gay a25fce8e9e Silence GCC warning about differing types on the branches of a conditional expression
llvm-svn: 128605
2011-03-31 01:43:22 +00:00
Bob Wilson 7201af3914 Use intrinsics for Neon vmull operations. Radar 9208957.
llvm-svn: 128590
2011-03-31 00:09:00 +00:00
Jay Foad 20c0f02cc5 Remove PHINode::reserveOperandSpace(). Instead, add a parameter to
PHINode::Create() giving the (known or expected) number of operands.

llvm-svn: 128538
2011-03-30 11:28:58 +00:00
Jay Foad 27e20c3c58 (Almost) always call reserveOperandSpace() on newly created PHINodes.
llvm-svn: 128534
2011-03-30 11:19:06 +00:00
Eli Friedman b4d3c99929 Make sure we aggressively attach nounwind (etc.) to calls to library
functions of the form __builtin_XXX.

llvm-svn: 128198
2011-03-24 05:09:45 +00:00
Eric Christopher cf5e83b471 __clear_cache() is varargs and people will occasionally write it without
arguments. Process only the arguments that people write, but process
all of them.

Fixes rdar://8900346

llvm-svn: 127616
2011-03-14 20:30:34 +00:00
Chris Lattner 91c08ad14a update for ConstantVector API change.
llvm-svn: 125538
2011-02-15 00:14:06 +00:00
Chris Lattner dd68bd0a65 revert my ConstantVector patch, it seems to have made the llvm-gcc
builders unhappy.

llvm-svn: 125505
2011-02-14 18:16:09 +00:00
Chris Lattner 2d9a7672db update for ConstantVector::get API change.
llvm-svn: 125488
2011-02-14 07:55:40 +00:00
John McCall ad7c5c1657 Reorganize CodeGen{Function,Module} to eliminate the unfortunate
Block{Function,Module} base class.  Minor other refactorings.

Fixed a few address-space bugs while I was there.

llvm-svn: 125085
2011-02-08 08:22:06 +00:00
Ted Kremenek 582a0999fb Null initialize a few variables flagged by
clang's -Wuninitialized-experimental warning.
While these don't look like real bugs, clang's
-Wuninitialized-experimental analysis is stricter
than GCC's, and these fixes have the benefit
of being general nice cleanups.

llvm-svn: 124072
2011-01-23 17:04:59 +00:00
John McCall 20f6ab828a Fix a latent bug where, after emitting an expression statement, we would
delete the block we began emitting into if it had no predecessors.  We never
want to do this, because there are several valid cases during statement
emission where an existing block has no known predecessors but will acquire
some later.  The case in my test case doesn't inherently fall into this 
category, because we could safely emit the case-range code before the statement
body, but there are examples with labels that can't be fallen into 
that would also demonstrate this bug.

rdar://problem/8837067

llvm-svn: 123303
2011-01-12 03:41:02 +00:00
Benjamin Kramer 39f987ffd0 Make a helper function static.
llvm-svn: 123118
2011-01-09 13:21:33 +00:00
Benjamin Kramer acc6b4e2fd Simplify mem{cpy, move, set} creation with IRBuilder.
llvm-svn: 122634
2010-12-30 00:13:21 +00:00
Bob Wilson 63fbbc6ef8 Implement builtins for Neon half-precision float conversions.
Also tweak the VCVT_F32_F16 entry in arm_neon.td to be more consistent with
the other floating-point conversion builtins.  Radar 8068427.

llvm-svn: 121916
2010-12-15 23:36:44 +00:00
Bob Wilson 546b691c73 Add missing switch case for the quad-register version of the Neon vmul builtin.
llvm-svn: 121595
2010-12-10 23:09:09 +00:00
Bob Wilson 0348af667a Fix clang crashes on Neon vld[234]_dup intrinsics with 64-bit element types.
The 64-bit element vectors need to be handled as a special case.

llvm-svn: 121592
2010-12-10 22:54:58 +00:00
Bob Wilson 4c4a00a10b Add missing switch case to handle builtin for Neon vqnegq.
llvm-svn: 121468
2010-12-10 06:26:19 +00:00
Bob Wilson d1767c5c15 LLVM's intrinsics for vpaddl and vpadal have 2 overloaded types.
Clang was only specifying the overloaded result type.  PR8483.

llvm-svn: 121464
2010-12-10 05:51:07 +00:00
Bob Wilson 571c907cdf Neon compare absolute LLVM intrinsics are not overloaded. PR8484.
llvm-svn: 121447
2010-12-10 01:11:38 +00:00
Bob Wilson 482afae812 Stop using builtins for the "_lane" variants of saturating multiply intrinsics.
Remove the "splat" parameter from the EmitNeonCall function, since it is no
longer needed.

llvm-svn: 121300
2010-12-08 22:37:56 +00:00
Bob Wilson b038120094 Stop using clang builtins for Neon vabdl and vabal intrinsics.
llvm-svn: 121288
2010-12-08 21:39:47 +00:00
Bob Wilson 7d66df9c33 Stop using clang builtins for Neon vaba intrinsics.
llvm-svn: 121277
2010-12-08 20:09:54 +00:00
Chandler Carruth 8005adf031 Silence an unused variable warning.
llvm-svn: 121221
2010-12-08 01:29:17 +00:00
Bob Wilson 8811c3a72e Stop using clang builtins for Neon vadd[lw] and vsub[lw] intrinsics.
llvm-svn: 121214
2010-12-08 00:14:43 +00:00
Bob Wilson f81b09db68 Stop using clang builtins for Neon vmlal{_n,_lane} and vmlsl{_n,_lane}.
llvm-svn: 121210
2010-12-07 23:54:55 +00:00
Bob Wilson 210f6ddecc Stop using a clang builtin for Neon vdup_lane intrinsics.
llvm-svn: 121191
2010-12-07 22:40:02 +00:00
Bob Wilson 7f3c0aa96f Stop using a clang builtin for Neon vmull_lane intrinsic.
llvm-svn: 121189
2010-12-07 22:03:46 +00:00
Bob Wilson 160fdf49e4 Add a missing parameter, without which clang crashes for vqshlu_n intrinsics.
llvm-svn: 121188
2010-12-07 22:03:43 +00:00
Bob Wilson 7795599f4b Add support for vmul_p8 Neon intrinsic. Radar 8446141.
llvm-svn: 120812
2010-12-03 17:29:39 +00:00
Bob Wilson 4fa993fc51 Add a separate rightShift flag instead of reusing the existing "poly" variable
to distinguish vsri/vsli.

llvm-svn: 120806
2010-12-03 17:10:22 +00:00
John McCall 3a7f6926d1 Restore r117403 (fixing IR gen for bool atomics), this time being less
aggressive about the form we expect bools to be in.  I don't really have
time to fix all the sources right now.

llvm-svn: 117486
2010-10-27 20:58:56 +00:00
Rafael Espindola 9d798a07e4 Revert r117403 as it caused PR8480.
llvm-svn: 117456
2010-10-27 17:13:49 +00:00
John McCall 6bde954f47 Extract procedures to do scalar-to-memory and memory-to-scalar conversions
in IR gen, and use those to fix a correctness issue with bool atomic
intrinsics.  rdar://problem/8461234

llvm-svn: 117403
2010-10-26 22:09:15 +00:00
Argyrios Kyrtzidis 073c9cb592 Implement __builtin_ia32_vec_ext_v2si function (required by Qt).
llvm-svn: 116162
2010-10-10 03:19:11 +00:00
Bill Wendling 65b2a965fb Add target implementations for the X86 builtins:
__builtin_ia32_vec_init_v8qi
  __builtin_ia32_vec_init_v4hi
  __builtin_ia32_vec_init_v2si

They are lowered to bitcasts. (These are all ready tested by the gcc testsuite.)
<rdar://problem/8529957>

llvm-svn: 116147
2010-10-09 08:47:25 +00:00
Chris Lattner 64d7f2a014 when expanding a builtin, if the argument is required to be a constant,
force it to be a constant instead of emitting with EmitScalarExpr.  In
-ftrapv mode, they are not the same.

This fixes rdar://8478728 + PR8221

llvm-svn: 115388
2010-10-02 00:09:12 +00:00
Chris Lattner 07e96866a2 tidy
llvm-svn: 115383
2010-10-01 23:43:16 +00:00
Bill Wendling 11191f11b8 Accidentally committed some temporary changes on my branch when reverting patches.
llvm-svn: 114936
2010-09-28 01:28:56 +00:00
Bill Wendling 6d8c442e08 Temporarily revert 114929 114925 114924 114921. It looked like they (or at least
one of them) were causing a series of failures:

http://google1.osuosl.org:8011/builders/clang-x86_64-darwin10-selfhost/builds/4518

svn merge -c -114929 https://llvm.org/svn/llvm-project/cfe/trunk
--- Reverse-merging r114929 into '.':
U    include/clang/Sema/Sema.h
U    include/clang/AST/DeclCXX.h
U    lib/Sema/SemaDeclCXX.cpp
U    lib/Sema/SemaTemplateInstantiateDecl.cpp
U    lib/Sema/SemaDecl.cpp
U    lib/Sema/SemaTemplateInstantiate.cpp
U    lib/AST/DeclCXX.cpp
svn merge -c -114925 https://llvm.org/svn/llvm-project/cfe/trunk
--- Reverse-merging r114925 into '.':
G    include/clang/AST/DeclCXX.h
G    lib/Sema/SemaDeclCXX.cpp
G    lib/AST/DeclCXX.cpp
svn merge -c -114924 https://llvm.org/svn/llvm-project/cfe/trunk
--- Reverse-merging r114924 into '.':
G    include/clang/AST/DeclCXX.h
G    lib/Sema/SemaDeclCXX.cpp
G    lib/Sema/SemaDecl.cpp
G    lib/AST/DeclCXX.cpp
U    lib/AST/ASTContext.cpp
svn merge -c -114921 https://llvm.org/svn/llvm-project/cfe/trunk
--- Reverse-merging r114921 into '.':
G    include/clang/AST/DeclCXX.h
G    lib/Sema/SemaDeclCXX.cpp
G    lib/Sema/SemaDecl.cpp
G    lib/AST/DeclCXX.cpp

llvm-svn: 114933
2010-09-28 01:09:49 +00:00
Bill Wendling 1308667f18 Revert my patch changing the MMX "shift" intrinsics that take immediates into
"shift with non-immediate" intrinsics. It gets here because we they aren't
immediates anymore.

llvm-svn: 114890
2010-09-27 21:22:25 +00:00
Chris Lattner b2f659b7a0 fix the rest of rdar://8461279 - clang miscompiles address-space qualified atomics
llvm-svn: 114503
2010-09-21 23:40:48 +00:00
Chris Lattner c9066d3072 same bug as before, this time with __sync_val_compare_and_swap.
llvm-svn: 114502
2010-09-21 23:35:30 +00:00
Chris Lattner 7cf46bfda0 fix __sync_bool_compare_and_swap to work with address-space qualified types.
llvm-svn: 114498
2010-09-21 23:24:52 +00:00
Bill Wendling d632616f86 The MMX shift-with-immediate builtins require the equivalent
shift-with-immediate LLVM intrinsics.

llvm-svn: 114239
2010-09-17 23:46:16 +00:00
Bob Wilson 6061b05d51 Translate NEON vabdl, vaba, and vabal builtins to be implemented using the
vabd intrinsic combined with zext and add operations.

llvm-svn: 112937
2010-09-03 01:27:09 +00:00
Bob Wilson 5b4904f7a3 Add a bunch of missing bitcasts for clang NEON builtin expansions.
Radar 8388233

llvm-svn: 112890
2010-09-02 22:37:30 +00:00
Bob Wilson 1b87c9a646 Translate NEON vmull, vmlal, and vmlsl builtins to llvm multiply-add/sub
with zext/sext operations, instead of to llvm intrinsics.  I have a plan to
avoid the clang builtins for these, but it is going to take a little longer
and I want to get the NEON intrinsics updated before the 2.8 release.

llvm-svn: 112764
2010-09-01 23:20:27 +00:00
Bob Wilson b9225f7f85 Translate NEON vmovn builtin to a vector truncation instead of using an llvm
intrinsic.

llvm-svn: 112504
2010-08-30 19:57:13 +00:00
Bob Wilson 0e7a398936 Translate NEON vaddl, vaddw, vsubl, and vsubw builtins to llvm add/sub
with zext/sext operations, instead of to llvm intrinsics.  (We can also
get rid of the clang builtins and handle these entirely in the arm_neon.h
header if there is a way to express vector sext/zext in C.)

llvm-svn: 112413
2010-08-29 05:14:28 +00:00
Bob Wilson 7b0d032d0c Add the new alignment arguments for NEON load/store intrinsics, based on the
types of the pointer address expressions used with those intrinsics.

llvm-svn: 112272
2010-08-27 17:14:29 +00:00
Daniel Dunbar e3d87d21f3 IRgen/NEON: Fix codegen of vzip and vzipq.
- Will be adding an executable test case to test-suite repo.

llvm-svn: 112126
2010-08-26 00:55:57 +00:00
Bob Wilson b02244969d Translate NEON vmovl intrinsics to zero/sign-extend operations.
llvm-svn: 111612
2010-08-20 03:36:08 +00:00
Nate Begeman ad5dd42817 vdup_lane was missing
<rdar://problem/8278732>

llvm-svn: 110420
2010-08-06 01:24:57 +00:00
Nate Begeman f568b074db Add support for VFP status & control operations for ARM.
llvm-svn: 110153
2010-08-03 21:32:34 +00:00
Nate Begeman 1194bd2bd8 Wire up sema checking for __builtin_arm_usat and __builtin_arm_ssat immediates.
llvm-svn: 109814
2010-07-29 22:48:34 +00:00
Fariborz Jahanian 0ebca28f1d 2nd argument of __builtin_expect must be evaluated
if it has side-effects, to match gcc's behaviour.
Addresses radar 8172109.

llvm-svn: 109467
2010-07-26 23:11:03 +00:00
Chandler Carruth bc8cab16c5 Improve the representation of the atomic builtins in a few ways. First, we make
their call expressions synthetically have the "deduced" types based on their
first argument. We only insert conversions in the AST for arguments whose
values require conversion to match the value type expected. This keeps PR7600
closed by maintaining the return type, but avoids assertions due to unexpected
implicit casts making the type unsigned (test case added from Daniel).

The magic is moved into the codegen for the atomic builtin which inserts the
casts as needed at the IR level to raise the type to an integer suitable for
the LLVM intrinsic. This shouldn't cause any real change in functionality, but
now we can make the builtin be more truly polymorphic.

llvm-svn: 108638
2010-07-18 07:23:17 +00:00
Chris Lattner 5e016ae983 finally get around to doing a significant cleanup to irgen:
have CGF create and make accessible standard int32, int64 and
intptr types.  This fixes a ton of 80 column violations 
introduced by LLVMContextification and cleans up stuff a lot.

llvm-svn: 106977
2010-06-27 07:15:29 +00:00
Nate Begeman ed48c857dc Implement remaining codegen for NEON, all operations should now work.
llvm-svn: 106407
2010-06-20 23:05:28 +00:00
Anton Korobeynikov cc50b7d7d5 More AltiVec support.
Patch by Anton Yartsev!

llvm-svn: 106387
2010-06-19 09:47:18 +00:00
Nate Begeman dbafec1f3e Remove last of the bool shifts for MS VC++, patch by dimitry andric
llvm-svn: 106206
2010-06-17 02:26:59 +00:00
Fariborz Jahanian 4a30307840 Fixed conflict between objc_memmove_collectable builtin
decl. and one defined in a darwin header file.

llvm-svn: 106107
2010-06-16 16:22:04 +00:00
Fariborz Jahanian 021510e96f Patch adds support for copying of those
objective-c++ class objects which have GC'able objc object
pointers and need to use ObjC's objc_memmove_collectable
API (radar 8070772). 

llvm-svn: 106061
2010-06-15 22:44:06 +00:00
Benjamin Kramer 7039fcbc5d An implementation of __builtin_fpclassify, the way Chris Lattner described it, by Jörg Blank.
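
Usage sketch (the five class arguments are passed in the FP_NAN, FP_INFINITE, FP_NORMAL, FP_SUBNORMAL, FP_ZERO order the GCC builtin expects):

  #include <math.h>

  // Essentially what the fpclassify() macro can expand to.
  int classify(double x) {
    return __builtin_fpclassify(FP_NAN, FP_INFINITE, FP_NORMAL,
                                FP_SUBNORMAL, FP_ZERO, x);
  }
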
llvm-svn: 105936
2010-06-14 10:30:41 +00:00
Nate Begeman 91e1feab7a Add some missing shifts
Fix multiplies by scalar
Add SemaChecking code for all immediates
Add SemaChecking-gen support to arm_neon.td

llvm-svn: 105930
2010-06-14 05:21:25 +00:00
Nate Begeman d773fe67dd Most of NEON sema checking & fix to polynomial type detection
llvm-svn: 105908
2010-06-13 04:47:52 +00:00
Nate Begeman c6ac0ce89f Shifts complete. Only vld & sema checking of constants remain.
llvm-svn: 105879
2010-06-12 06:06:07 +00:00
Nate Begeman dd715805ab vbsl and vrev* are implemented via arm_neon.h
llvm-svn: 105875
2010-06-12 03:11:41 +00:00
Nate Begeman 8ed060b95a Most of the remaining builtins; 2 generics, vld, and rounding shifts remain.
llvm-svn: 105848
2010-06-11 22:57:12 +00:00
Nate Begeman e0935ffa50 Multiplies, some shifts, set_lane
llvm-svn: 105793
2010-06-10 18:11:55 +00:00
Nate Begeman 4a04b467d9 support _lane ops, and multiplies by scalar.
llvm-svn: 105770
2010-06-10 00:17:56 +00:00
Nate Begeman d90aa43bdf Implement codegen for hadd, hsub, max, min, mlal, movl, movn, padal, mov_n
Make note about how to handle the dozen or so multiply by scalar ops.

llvm-svn: 105734
2010-06-09 18:04:15 +00:00
Nate Begeman 4307a25545 More accurate BuiltinsARM.def
vget_lane support

llvm-svn: 105684
2010-06-09 05:30:26 +00:00
Rafael Espindola 6bb986d530 Simplify the code a bit and avoid a gcc warning about uninitialized variables.
llvm-svn: 105676
2010-06-09 03:48:40 +00:00
Nate Begeman 5548309fa7 Implement transpose/zip/unzip & table lookup.
Test out some basic constant-checking.

llvm-svn: 105667
2010-06-09 01:10:23 +00:00
Nate Begeman ae6b1d8010 Fix NEON intrinsic argument passing, support vext. Most now successfully make it through codegen to the .s file
llvm-svn: 105599
2010-06-08 06:03:01 +00:00
Rafael Espindola 895e51de4a Fix what looks like a merge problem that broke __clear_cache.
llvm-svn: 105595
2010-06-08 03:52:53 +00:00
Nate Begeman 16372afeab Implement ARM NEON up through vcvt, alphabetically.
llvm-svn: 105590
2010-06-08 00:17:19 +00:00
Rafael Espindola a54062ef0c Implement __clear_cache on ARM.
llvm-svn: 105537
2010-06-07 17:26:50 +00:00
Nate Begeman 5968eb270a weekend checkpoint of arm neon builtins codegen.
TODO: add remainder of builtins to CGBuiltin, add code to SemaChecking to validate constants.

llvm-svn: 105532
2010-06-07 16:01:56 +00:00
Dan Gohman ed0347333e This cast is no longer needed; the FIXME is fixed.
llvm-svn: 104919
2010-05-28 01:45:35 +00:00
Jim Grosbach 4cf59b9e91 Update __builtin_setjmp codegen to match llvmCore changes in r104900.
llvm-svn: 104902
2010-05-27 23:54:20 +00:00
John McCall 02269a66b3 Enable the implementation of __builtin_setjmp and __builtin_longjmp. Not all
LLVM backends support these yet.

llvm-svn: 104867
2010-05-27 18:47:06 +00:00
Benjamin Kramer fdb61d78e9 Implement codegen for __builtin_isnormal.
llvm-svn: 104118
2010-05-19 11:24:26 +00:00
Chris Lattner 3628326b44 add todos for isinf_sign and isnormal, which I don't intend to implement
in the near future.

llvm-svn: 103169
2010-05-06 06:13:53 +00:00
Chris Lattner dbff4bf5f4 implement codegen support for __builtin_isfinite, part of PR6083
llvm-svn: 103168
2010-05-06 06:04:13 +00:00
Chris Lattner 43660c5bc0 implement part of PR6083: codegen support for isinf. Like isnan,
this is generating correct but suboptimal (extra extend to double)
code for the float case.  Will investigate next.

llvm-svn: 103166
2010-05-06 05:35:16 +00:00
Eric Christopher 1bbc7086ff Rewrite handling of 64-bit palignr intrinsics to be vector shuffles.
Stop multiplying constant by 8 accordingly in the header and change
intrinsic definition for what types we expect.

Add to existing palignr test to check that we're emitting the correct things.

llvm-svn: 101332
2010-04-15 01:43:08 +00:00
Chris Lattner dad4062b4d implement altivec.h and a bunch of support code, patch by Anton Yartsev!
llvm-svn: 101215
2010-04-14 03:54:58 +00:00
John McCall 8586bfd85d @llvm.sqrt isn't really close enough to C's sqrt to justify emitting calls
to the intrinsic, even when math-errno is off.

Fixes rdar://problem/7828230 by falling back on the library function.

llvm-svn: 100613
2010-04-07 08:20:20 +00:00
Mon P Wang cc2ab0cdc9 Reapply patch for adding support for address spaces and added a isVolatile field to memcpy, memmove, and memset.
llvm-svn: 100305
2010-04-04 03:10:52 +00:00
Mon P Wang f7f3bff646 Revert r100193 since it causes failures in objc in clang
llvm-svn: 100200
2010-04-02 18:43:42 +00:00
Mon P Wang 4b82a88764 Reapply patch for adding support for address spaces and added a isVolatile field to memcpy, memmove, and memset.
llvm-svn: 100193
2010-04-02 18:04:30 +00:00
Bob Wilson adb58e32cc Revert Mon Ping's 99930 due to broken llvm-gcc buildbots.
llvm-svn: 99949
2010-03-30 22:28:46 +00:00
Mon P Wang 231e99743a Added support for address spaces and added a isVolatile field to memcpy, memmove, and memset
llvm-svn: 99930
2010-03-30 21:02:45 +00:00
Daniel Dunbar 3f540c0d7d Remove support for nand atomic builtins. They are inconsistently implemented in
gcc, and the common expectation seems to be that they are unused. If and when
someone cares we can add them back with well-documented semantics.

llvm-svn: 99522
2010-03-25 17:13:09 +00:00
Daniel Dunbar 4ff562d557 IRgen: Wrap atomic intrinsics with memory barriers, to ensure we honor the semantics.
- This should be conservatively correct, we eventually should have target hooks for platforms that are less strict.

llvm-svn: 99050
2010-03-20 07:04:11 +00:00
Eli Friedman 99d20f83ba PR6515: Implement __builtin_signbit and friends.
I'm reasonably sure my implementation is correct, but it would be nice if
someone could double-check.
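
Usage sketch:

  // Non-zero exactly when the sign bit is set, so it can tell -0.0
  // from +0.0, unlike an ordinary comparison.
  int is_negative_zero(double x) {
    return x == 0.0 && __builtin_signbit(x);
  }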

llvm-svn: 97864
2010-03-06 02:17:52 +00:00
John McCall beec5a080f Implement __builtin_dwarf_sp_column for i386 (Darwin and not), x86-64 (all),
and ARM.  Implement __builtin_init_dwarf_reg_size_table for i386 (both) and
x86-64 (all).

llvm-svn: 97859
2010-03-06 00:35:14 +00:00
John McCall 731be6620c Revert changes r97693, r97700, and r97718.
Our testing framework can't deal with disabled targets yet.

llvm-svn: 97719
2010-03-04 04:29:44 +00:00
John McCall 81d4d12504 Implement __builtin_dwarf_sp_column().
llvm-svn: 97700
2010-03-04 00:44:01 +00:00
Chris Lattner 5cc15e058b add framework for ARM builtins, Patch by Edmund Grimley Evans!
llvm-svn: 97656
2010-03-03 19:03:45 +00:00
John McCall 515c3c548c Sketch out an implementation for __builtin_dwarf_cfa. I have no idea
why the front-end is calculating the argument to llvm.eh.dwarf.cfa().

llvm-svn: 97653
2010-03-03 10:30:05 +00:00
John McCall 66769f8544 Implement __builtin_eh_return.
llvm-svn: 97643
2010-03-03 05:38:58 +00:00
John McCall d4f4b7f5ee Add proper target hooks for __builtin_extract_return_address and
__builtin_frob_return_address.  The implementations for both are
still trivial in the default case.

llvm-svn: 97638
2010-03-03 04:15:11 +00:00
John McCall b6cc2c0439 Inspired by seeing "MIPS" go by in the commits, I've gone ahead and
implemented a (codegen) target hook for __builtin_extend_pointer.
I'm also making it return a uint64_t instead of an unsigned word;  this
comports with typical usage (i.e. the one use I know of).

I don't know if any of the existing targets requires this hook to be
set (other than x86 and x86_64, which I know do not).

llvm-svn: 97547
2010-03-02 03:50:12 +00:00
John McCall 4b613fae35 After much consultation aimed at figuring out what this builtin actually
does, document the results and then implement __builtin_extend_pointer for
platforms where it's a no-op.

llvm-svn: 97540
2010-03-02 02:31:24 +00:00
Daniel Dunbar a7566f163a IRgen: Add CreateMemTemp, for creating a temporary memory object for a particular type, and flood fill. - CreateMemTemp sets the alignment on the alloca correctly, which fixes a great many places in IRgen where we were doing the wrong thing.
- This fixes many many more places than the test case, but my feeling is we need to audit alignment systematically so I'm not inclined to try hard to test the individual fixes in this patch. If this bothers you, patches welcome!

PR6240.

llvm-svn: 95648
2010-02-09 02:48:28 +00:00
Daniel Dunbar 8848175547 IRgen: Fix some CreateTempAlloca calls to use ConvertTypeForMem when that is
conceptually correct. Review appreciated (Chris, Eli, Anders).

llvm-svn: 95401
2010-02-05 18:56:49 +00:00
Eli Friedman d6ef69a7db Add bzero builtin; this should help codegen quality for code using this
function.

llvm-svn: 94320
2010-01-23 19:00:10 +00:00
David Chisnall 481e3a87fe Created __builtin___NSStringMakeConstantString() builtin, which generates constant Objective-C strings.
llvm-svn: 94274
2010-01-23 02:40:42 +00:00