The ``disable_tail_calls`` attribute instructs the backend to not
perform tail call optimization inside the marked function.
For example,
int callee(int);
int foo(int a) __attribute__((disable_tail_calls)) {
  return callee(a); // This call is not tail-call optimized.
}
Note that this attribute is different from 'not_tail_called', which
prevents tail-call optimization to the marked function.
rdar://problem/8973573
Differential Revision: http://reviews.llvm.org/D12547
llvm-svn: 252986
In r244063, I had caused these builtins to call the same-named library
functions, __atomic_*_fetch_SIZE. However, this was incorrect: while
those functions are in fact supported by GCC's libatomic, they're not
documented by the spec (and gcc doesn't ever call them).
Instead, you're /supposed/ to call the __atomic_fetch_* builtins and
then redo the operation inline to return the final value.
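For illustration, a minimal sketch of that lowering (hypothetical function name; __atomic_fetch_add is the documented builtin):
// Hypothetical illustration only: recover the *_fetch result from the
// documented __atomic_fetch_* form by redoing the operation inline.
unsigned add_fetch_via_fetch_add(unsigned *p, unsigned v) {
  unsigned old = __atomic_fetch_add(p, v, __ATOMIC_SEQ_CST); // returns the old value
  return old + v;                                            // the *_fetch (new) value
}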
Differential Revision: http://reviews.llvm.org/D14385
llvm-svn: 252920
The C++ spec (3.6.1.3) says "The function `main` shall not be used within a program". This implies that it cannot recurse, so add the norecurse attribute to help the midend out a bit.
llvm-svn: 252902
target features that the caller function doesn't provide. This matches
the existing backend failure to inline functions that don't have
matching target features - and diagnoses earlier in the case of
always_inline.
Fix up a few test cases that were, in fact, invalid if you tried
to generate code from the backend with the specified target features
and add a couple of tests to illustrate what's going on.
This should fix PR25246.
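As a rough illustration of the always_inline case (hypothetical function names; assumes the TU is compiled without -msse4.2):
__attribute__((always_inline, target("sse4.2")))
static inline unsigned crc_step(unsigned acc, unsigned v) {
  return __builtin_ia32_crc32si(acc, v); // requires the sse4.2 feature
}
unsigned caller(unsigned acc, unsigned v) {
  return crc_step(acc, v); // now diagnosed here instead of failing in the backend
}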
llvm-svn: 252834
This is about how we handle a static member of a template. Before this commit,
we used internal linkage for the IR thread-local variable, which is inefficient.
With this commit, we start to follow the Itanium C++ ABI.
rdar://problem/23415206
Reviewed by John McCall.
llvm-svn: 252814
We used to emit the store prior to the branch in the entry block. To make it more
efficient, this commit moves it to the init block. We still mark the variable as
initialized before initializing anything else.
llvm-svn: 252777
This comes up when a derived class destructor is equivalent to a base
class destructor defined in the same TU, and we try to alias them.
A COFF weak alias cannot satisfy a normal undefined symbol reference
from another TU. The other TU must also mark the referenced symbol as
weak, and we can't rely on that.
Clang already has a special case here for dllexport, but we failed to
realize that the problem also applies to other non-discardable symbols
such as those from explicit template instantiations.
Fixes PR25477.
llvm-svn: 252659
When a struct's size is not a power of 2, the corresponding _Atomic() type is
promoted to the nearest power of 2. We already correctly handled normal C++ expressions of
this form, but direct calls to the __c11_atomic_whatever builtins ended up
performing dodgy operations on the smaller non-atomic types (e.g. memcpy too
much). Later optimisations removed this as undefined behaviour.
This patch converts EmitAtomicExpr to allocate its temporaries at the full
atomic width, sidestepping the issue.
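A minimal sketch of the affected pattern (hypothetical names, using Clang's _Atomic extension):
struct S { char c[6]; }; // sizeof(S) == 6; _Atomic(S) is promoted to 8 bytes
_Atomic(S) g;

S load_s() {
  // Temporaries for the builtin are now allocated at the full 8-byte atomic
  // width, so the padded copy no longer reads or writes out of bounds.
  return __c11_atomic_load(&g, __ATOMIC_SEQ_CST);
}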
llvm-svn: 252507
The -meabi flag controls the LLVM EABI version.
Without '-meabi', or with '-meabi default', the target triple's default is implied.
'-meabi gnu' sets the GNU EABI.
'-meabi 4' or '-meabi 5' sets EABI version 4 or 5, respectively.
A similar patch was introduced in LLVM.
Patch by Vinicius Tinti.
llvm-svn: 252463
This attribute is used to prevent tail-call optimizations to the marked
function. For example, in the following piece of code, foo1 will not be
tail-call optimized:
int __attribute__((not_tail_called)) foo1(int);
int foo2(int a) {
  return foo1(a); // Tail-call optimization is not performed.
}
The attribute has effect only on statically bound calls. It has no
effect on indirect calls. Also, virtual functions and Objective-C
methods cannot be marked as 'not_tail_called'.
rdar://problem/22667622
Differential Revision: http://reviews.llvm.org/D12922
llvm-svn: 252369
The __builtin_signbit function returns the wrong value for type ppcf128 on big-endian
machines. This patch fixes how the value is generated in that case.
Patch by Aleksandar Beserminji.
Differential Revision: http://reviews.llvm.org/D14149
llvm-svn: 252307
Summary: This fixes a bug that's easily encountered in LLDB
(https://llvm.org/bugs/show_bug.cgi?id=22875). The problem here is that we
mangle a name during debug info emission but never actually emit the
Decl, so we run into problems in EmitDeclMetadata (which assumes such a Decl
exists). Fix that by just skipping metadata emission for mangled names that
don't have associated Decls.
Reviewers: rjmccall
Subscribers: labath, cfe-commits
Differential Revision: http://reviews.llvm.org/D13959
llvm-svn: 252229
for the root cause. The 'using llvm::isa;' declaration in Basic/LLVM.h only
pulls the declarations of llvm::isa that were declared prior to it into
namespace clang. In a modules build, this is a hermetic set of just the
declarations from LLVM. In a non-modules build, we happened to also pull the
declaration from lib/CodeGen/Address.h into namespace clang, which made the
code in question accidentally compile.
llvm-svn: 252211
This new builtin template allows for incredibly fast instantiations of
templates like std::integer_sequence.
Performance numbers follow:
My workstation has 64 GB of RAM + 20 Xeon cores at 2.8 GHz.
__make_integer_seq<std::integer_sequence, int, 90000> takes 0.25
seconds.
std::make_integer_sequence<int, 90000> takes unbounded time; it is still
running, and Clang is consuming gigabytes of memory.
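A rough usage sketch, assuming the usual std::integer_sequence-style interface:
template <class T, T... I> struct integer_sequence {};

template <class T, T N>
using make_integer_sequence = __make_integer_seq<integer_sequence, T, N>;

// make_integer_sequence<int, 4> is integer_sequence<int, 0, 1, 2, 3>, built
// directly by the compiler instead of by recursive template instantiation.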
Differential Revision: http://reviews.llvm.org/D13786
llvm-svn: 252036
A 'readonly' Objective-C property declared in the primary class can
effectively be shadowed by a 'readwrite' property declared within an
extension of that class, so long as the types and attributes of the
two property declarations are compatible.
Previously, this functionality was implemented by back-patching the
original 'readonly' property to make it 'readwrite', destroying source
information and causing some hideously redundant, incorrect
code. Simplify the implementation to express how this should actually
be modeled: as a separate property declaration in the extension that
shadows (via the name lookup rules) the declaration in the primary
class. While here, correct some broken Fix-Its, eliminate a pile of
redundant code, clean up the ARC migrator's handling of properties
declared in extensions, and fix debug info's naming of methods that
come from categories.
A wondrous side effect of doing this right is that it eliminates the
"AddedObjCPropertyInClassExtension" method from the AST mutation
listener, which in turn eliminates the last place where we rewrite
entire declarations in a chained PCH file or a module file. This
change (which fixes rdar://problem/18475765) will allow us to
eliminate the rewritten-decls logic from the serialization library,
and fixes a crash (rdar://problem/23247794) illustrated by the
test/PCH/chain-categories.m example.
llvm-svn: 251874
Certain CXXConstructExpr nodes require zero-initialization before a
constructor is called. We had a bug in the case where the constructor
is called on a virtual base: we zero-initialized the base's vbptr field.
A complementary bug is present in MSVC where no zero-initialization
occurs for the subobject at all.
This fixes PR25370.
llvm-svn: 251783
attributes to internal functions.
This patch fixes CodeGenModule::CreateGlobalInitOrDestructFunction to
use SetInternalFunctionAttributes instead of SetLLVMFunctionAttributes
to attach function attributes to internal functions.
Also, make sure the correct CGFunctionInfo is passed instead of always
passing what arrangeNullaryFunction returns.
rdar://problem/20828324
Differential Revision: http://reviews.llvm.org/D13610
llvm-svn: 251734
This sets the mostly expected Darwin default ABI options for these two
platforms. Active changes from these defaults for watchOS are in a later patch.
llvm-svn: 251708
This works around PR25162. The MSVC tables make it very difficult to
correctly inline a C++ destructor that contains try / catch. We've
attempted to address PR25162 in LLVM's backend, but it feels pretty
infeasible. MSVC and ICC both appear to avoid inlining such complex
destructors.
Long term, we want to fix this by making the inliner smart enough to
know when it is inlining into a cleanup, so it can inline simple
destructors (~unique_ptr and ~vector) while avoiding destructors
containing try / catch.
llvm-svn: 251576
GCC uses the x87DoubleExtended model for long doubles, and passes them
indirectly by address through function calls.
Also replace the existing mingw-long-double assembly emitting test with
an IR-level test.
llvm-svn: 251567
functions.
This commit fixes a bug in CGOpenMPRuntime.cpp and CGObjC.cpp where
some of the function attributes are not attached to newly created
functions.
rdar://problem/20828324
Differential Revision: http://reviews.llvm.org/D13928
llvm-svn: 251476
Linking options for a particular file depend on the option that specifies the file.
Currently there are two:
* -mlink-bitcode-file links in complete content of the specified file.
* -mlink-cuda-bitcode links in only the symbols needed by current TU.
Linked symbols are internalized. This bitcode linking mode is used to
link device-specific bitcode provided by CUDA.
Files are linked in the order they are specified on the command line.
-mlink-cuda-bitcode replaces -fcuda-uses-libdevice flag.
Differential Revision: http://reviews.llvm.org/D13913
llvm-svn: 251427
only one of a group of possibilities.
This changes the syntax in the builtin files to represent:
  ',' as the "and" operator
  '|' as the "or" operator
The former syntax matches how the backend tablegen files represent
multiple subtarget features being required.
Updated the builtin and intrinsic headers accordingly for the new
syntax.
llvm-svn: 251388
The MCU psABI calling convention is somewhat, but not quite, like -mregparm 3.
In particular, the rules involving structs are different.
Differential Revision: http://reviews.llvm.org/D13978
llvm-svn: 251224
Previously, __weak was silently accepted and ignored in MRC mode.
That makes this a potentially source-breaking change that we have to
roll out cautiously. Accordingly, for the time being, actual support
for __weak references in MRC is experimental, and the compiler will
reject attempts to actually form such references. The intent is to
eventually enable the feature by default in all non-GC modes.
(It is, of course, incompatible with ObjC GC's interpretation of
__weak.)
If you like, you can enable this feature with
-Xclang -fobjc-weak
but like any -Xclang option, this option may be removed at any point,
e.g. if/when it is eventually enabled by default.
This patch also enables the use of the ARC __unsafe_unretained qualifier
in MRC. Unlike __weak, this is being enabled immediately. Since
variables are essentially __unsafe_unretained by default in MRC,
the only practical uses are (1) communication and (2) changing the
default behavior of by-value block capture.
As an implementation matter, this means that the ObjC ownership
qualifiers may appear in any ObjC language mode, and so this patch
removes a number of checks for getLangOpts().ObjCAutoRefCount
that were guarding the processing of these qualifiers. I don't
expect this to be a significant drain on performance; it may even
be faster to just check for these qualifiers directly on a type
(since it's probably in a register anyway) than to do N dependent
loads to grab the LangOptions.
rdar://9674298
llvm-svn: 251041
We got this right for Itanium but not MSVC because CGRecordLayoutBuilder
was checking if the base's size was zero when it should have been
checking the non-virtual size.
This fixes PR21040.
llvm-svn: 251036
This is almost entirely a matter of just flipping a switch. 99% of
the runtime support is available all the way back to when it was
implemented in the non-fragile runtime, i.e. in Lion. However,
fragile runtimes do not recognize ARC-style ivar layout strings,
which means that accessing __strong or __weak ivars reflectively
(e.g. via object_setIvar) will end up accessing the ivar as if it
were __unsafe_unretained. Therefore, when using reflective
technologies like KVC, be sure that your paths always refer to a
property.
rdar://23209307
llvm-svn: 250955
Specifically, handle under-aligned object references (by explicitly
ignoring them, because this just isn't representable in the format;
yes, this means that GC silently ignores such references), descend
into anonymous structs and unions, stop classifying fields of
pointer-to-strong/weak type as strong/weak in ARC mode, and emit
skips to cover the entirety of block layouts in GC mode. As a
cleanup, extract this code into a helper class, avoid a number of
unnecessary copies and layout queries, generate skips implicitly
instead of explicitly tracking them, and clarify the bitmap-creation
logic.
llvm-svn: 250919
Summary: It breaks the build for the ASTMatchers
Subscribers: klimek, cfe-commits
Differential Revision: http://reviews.llvm.org/D13893
llvm-svn: 250827
Currently, debug info is not emitted for types that are used only in explicit casts. This regressed after a patch for better alignment handling. This patch fixes the bug.
Differential Revision: http://reviews.llvm.org/D13582
llvm-svn: 250795
The Intel MCU psABI requires floating-point values to be passed in-reg.
This makes the x86-32 ABI code respect "-mfloat-abi soft" and generate float inreg arguments.
Differential Revision: http://reviews.llvm.org/D13554
llvm-svn: 250689
match the feature set of the function that they're being called from.
This ensures that we can effectively diagnose some[1] code that would
instead ICE in the backend with a failure to select message.
Example:
__m128d foo(__m128d a, __m128d b) {
return __builtin_ia32_addsubps(b, a);
}
compiled for normal x86_64 via:
clang -target x86_64-linux-gnu -c
would fail to compile in the back end because the normal subtarget
features for x86_64 only include sse2 and the builtin requires sse3.
[1] We're still not erroring on:
__m128i bar(__m128i const *p) { return _mm_lddqu_si128(p); }
where we should fail and error on an always_inline function being
inlined into a function that doesn't support the subtarget features
required.
llvm-svn: 250473
This recommits r250398 with fixes to the tests for bot failures.
Add "-target x86_64-unknown-linux" to the clang invocations that
check for the gold plugin.
llvm-svn: 250455
Rolling this back for now since there are a couple of bot failures on
the new tests I added, and I won't have a chance to look at them in detail
until later this afternoon. I think the new tests need some restrictions on
having the gold plugin available.
This reverts commit r250398.
llvm-svn: 250402
Summary:
Add clang support for -flto=thin option, which is used to set the
EmitFunctionSummary code gen option on compiles.
Add -flto=full as an alias to the existing -flto.
Add tests to check for proper overriding of -flto variants on the
command line, and convert grep tests to FileCheck.
Reviewers: dexonsmith, joker.eph
Subscribers: davidxl, cfe-commits
Differential Revision: http://reviews.llvm.org/D11908
llvm-svn: 250398
Add support for the `-fdebug-prefix-map=` option as in GCC. The syntax is
`-fdebug-prefix-map=OLD=NEW`. When compiling files from a path beginning with
OLD, change the debug info to indicate the path as starting with NEW. This is
particularly helpful if you are preprocessing in one path and compiling in
another (e.g. for a build cluster with distcc).
Note that the linearity of the implementation is not as terrible as it may seem.
This is normally done once per file with an expectation that the map will be
small (1-2) entries, making this roughly linear in the number of input paths.
Addresses PR24619.
llvm-svn: 250094
CGBlocks.cpp.
This commit fixes a bug in clang's code-gen where it creates the
following functions but doesn't attach function attributes to them:
__copy_helper_block_
__destroy_helper_block_
__Block_byref_object_copy_
__Block_byref_object_dispose_
rdar://problem/20828324
Differential Revision: http://reviews.llvm.org/D13525
llvm-svn: 249735
early.
This is needed in a patch I plan to commit later, in which a null Decl
pointer is passed to SetLLVMFunctionAttributesForDefinition.
Relevant discussion is in http://reviews.llvm.org/D13525.
llvm-svn: 249722
No ABI for C++ currently makes it possible to implement the standard
100% perfectly. We wrongly hid some of our compatible behavior behind
-fms-compatibility instead of tying it to the compiler ABI.
llvm-svn: 249656
The backend restores the stack pointer after recovering from an
exception. This is similar to r245879, but it doesn't try to use the
normal cleanup mechanism, so hopefully it won't cause the same breakage.
llvm-svn: 249640
We don't have a good place to put them. Our previous spot was causing us
to optimize loads from the exception object to undef, because it was
after the catchpad instruction that models the write to the catch
object.
llvm-svn: 249616
Currently codegen crashes trying to emit a cast to bool &. It happens because the bool type is converted to i1, and later the lvalue for the reference is converted to i1*. But when codegen tries to load this lvalue, it crashes trying to load a value from this i1*.
Differential Revision: http://reviews.llvm.org/D13325
llvm-svn: 249534
include/clang/CodeGenABITypes.h is meant to be included by external
users, but using a unique_ptr on the private CodeGenModule introduces a
dependency on the type definition that prevents such a use.
NFC
llvm-svn: 249328
OpenCL v1.1 s6.2.2: for the boolean value true, every bit in the result vector should be set.
This change treats the i1 value as signed for the purposes of performing the cast to integer,
and therefore sign-extends it into the result.
Patch by Neil Hickey!
http://reviews.llvm.org/D13349
llvm-svn: 249301
I randomly came across this difference between AArch64 and other targets:
on the latter, we don't emit nil checks for known non-nil class method
calls thanks to r247350, but we still do for AArch64 stret calls.
They use different code paths, because those are special, as they go
through the regular msgSend, not the msgSend*_stret variants.
llvm-svn: 249205
Ensure that the vptr store in the most-derived constructor is not behind
an invariant group barrier. Previously, the base-most vptr store would
be the one behind no barrier, and that could result in the creator of
the object thinking it had the base-most vtable.
This bug caused clang to call pure virtual functions when they were called from a
constructor body.
http://reviews.llvm.org/D13373
llvm-svn: 249197
This patch implements the outlining for offloading functions for code
annotated with the OpenMP target directive. It uses a temporary naming
of the outlined functions that will have to be updated later on once
target side codegen and registration of offloading libraries is
implemented - the naming needs to be made unique in the produced
library.
llvm-svn: 249148
This reverts commit r248982 as it was breaking the ARM buildbots and the fix didn't work.
This reverts commit r248984, the fix that didn't work.
llvm-svn: 249005
Summary: __nvvm_atom_cas_* returns the old value instead of whether the swap succeeds.
Reviewers: eliben, tra
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D13306
llvm-svn: 248951
- Remove virtual SC_OpenCLWorkGroupLocal storage type specifier
as it conflicts with static local variables now and prevents
diagnosing static local address space variables correctly.
- Allow static local and global variables (OpenCL2.0 s6.8 and s6.5.1).
- Improve diagnostics of allowed ASes for variables in different scopes:
(i) Global or static local variables have to be in global
or constant ASes (OpenCL1.2 s6.5, OpenCL2.0 s6.5.1);
(ii) Non-kernel function variables can't be declared in local
or constant ASes (OpenCL1.1 s6.5.2 and s6.5.3).
http://reviews.llvm.org/D13105
llvm-svn: 248906
This is the clang commit associated with llvm r248887.
This commit changes the interface of the vld[1234], vld[234]lane, and vst[1234],
vst[234]lane ARM neon intrinsics and associates an address space with the
pointer that these intrinsics take. This changes, e.g.,
<2 x i32> @llvm.arm.neon.vld1.v2i32(i8*, i32)
to
<2 x i32> @llvm.arm.neon.vld1.v2i32.p0i8(i8*, i32)
This change ensures that address spaces are fully taken into account in the ARM
target during lowering of interleaved loads and stores.
Differential Revision: http://reviews.llvm.org/D13127
llvm-svn: 248888
Description.
If the simd clause is specified, the ordered regions encountered by any thread will use only a single SIMD lane to execute the ordered regions in the order of the loop iterations.
Restrictions.
An ordered construct with the simd clause is the only OpenMP construct that can appear in the simd region.
An ordered directive with the 'simd' clause is generated as an outlined function and a corresponding function call, to prevent this part of the code from being vectorized later in the backend.
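A rough usage sketch (hypothetical function, for illustration only):
void scale(float *out, const float *in, int n) {
  #pragma omp simd
  for (int i = 0; i < n; ++i) {
    #pragma omp ordered simd
    {
      out[i] = in[i] * 2.0f; // executed on a single SIMD lane, in iteration order
    }
  }
}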
llvm-svn: 248772
Parsing and sema analysis for 'simd' clause in 'ordered' directive.
Description
If the simd clause is specified, the ordered regions encountered by any thread will use only a single SIMD lane to execute the ordered
regions in the order of the loop iterations.
Restrictions
An ordered construct with the simd clause is the only OpenMP construct that can appear in the simd region
llvm-svn: 248696
OpenMP 4.1 extends the format of '#pragma omp ordered'. It adds 3 additional clauses: 'threads', 'simd' and 'depend'.
If no clause is specified, the ordered construct behaves as if the threads clause had been specified. If the threads clause is specified, the threads in the team executing the loop region execute ordered regions sequentially in the order of the loop iterations.
The loop region to which an ordered region without any clause or with a threads clause binds must have an ordered clause without the parameter specified on the corresponding loop directive.
llvm-svn: 248569
when building a module. Clang already records the module signature when
building a skeleton CU to reference a clang module.
Matching the id in the skeleton with the one in the module allows a DWARF
consumer to verify that they found the correct version of the module
without them needing to know about the clang module format.
llvm-svn: 248345
* adds the -aux-triple option to specify the auxiliary target triple
* propagates aux target info to the AST context and Preprocessor
* pulls in target-specific preprocessor macros
* pulls in target-specific builtins from the aux target
* sets the appropriate host or device attribute on builtins
Differential Revision: http://reviews.llvm.org/D12917
llvm-svn: 248299
This commit fixes an assert that is triggered when optnone is being
added to an IR function that is already marked with minsize and optsize.
rdar://problem/22723716
Differential Revision: http://reviews.llvm.org/D13004
llvm-svn: 248191
The signature may not have been computed at the time the module reference
is generated (e.g.: in the future while emitting debug info for a clang
module). Using the full module name is safe because each clang module may
only have a single definition.
NFC.
llvm-svn: 248037
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
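A rough usage sketch (hypothetical function; assumes the companion __builtin_ms_va_start/__builtin_ms_va_end and va_arg handling):
int __attribute__((ms_abi)) sum_ints(int n, ...) {
  __builtin_ms_va_list ap; // Win64-layout va_list, regardless of the host ABI
  __builtin_ms_va_start(ap, n);
  int total = 0;
  for (int i = 0; i < n; ++i)
    total += __builtin_va_arg(ap, int);
  __builtin_ms_va_end(ap);
  return total;
}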
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
Mingw generally wraps an old copy of msvcrt.dll which has these
personalities, so things should work out, or so I hear. I haven't tested
it.
llvm-svn: 247902
This avoids building a fake LLVM IR global variable just to ferry an i32
down into LLVM codegen. It also puts a nail in the coffin of using MS
ABI C++ EH with landingpads, since now we'll assert in the lpad code
when flags are present.
llvm-svn: 247843
ptr in dtor.
Summary:
After destruction, invocation of virtual functions is prevented
by poisoning the vtable pointer.
Reviewers: eugenis, kcc
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D12712
Fixed testing callback emission order to account for vptr.
Poison vtable in either complete or base dtor, depending on
if virtual bases exist. If virtual bases exist, poison in
complete dtor. Otherwise, poison in base.
Remove commented-out block.
llvm-svn: 247762
It is dangerous to do LTO on code with strict-vtable-pointers, because
one module has invariant.group.barriers and the other one does not.
In the future I want to just strip all invariant.group metadata from
vptr loads/stores and get rid of invariant.group.barrier calls.
http://reviews.llvm.org/D12580
llvm-svn: 247724
Patch improves codegen for OpenMP constructs. If the OpenMP region does not have an internal 'cancel' construct, a call to the 'void __kmpc_barrier()' runtime function is generated for all implicit/explicit barriers. If the region has an inner 'cancel' directive, then
```
if (__kmpc_cancel_barrier())
exit from outer construct;
```
code is generated.
Also, the code for the 'cancellation point' directive is not generated if the parent directive does not have a 'cancel' directive.
llvm-svn: 247681
These are a few cleanups I happened to have from trying to go in a
different direction recently, so just flushing them out while I have
them.
llvm-svn: 247593
This was the wrong direction to take anyway (because ultimately the
GlobalValue needed the pointee type again and /it/ used
PointerType::getElementType eventually anyway)... let's go a different way.
This reverts commit r236161.
llvm-svn: 247586
Current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such undefined reference.
With this patch, we emit a pair of
1. internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.alwaysinline is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247494
Current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such undefined reference.
With this patch, we emit a pair of
1. internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.alwaysinline is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
llvm-svn: 247465
-force-align-stack.
Also, make changes to the driver so that -mno-stack-realign is no longer
an option exposed to the end-user that disallows stack realignment in
the backend.
Differential Revision: http://reviews.llvm.org/D11815
llvm-svn: 247451
clang modules, if -dwarf-ext-refs (DebugTypesExtRefs) is specified.
This reimplements r247369 in about a third of the amount of code.
Thanks to David Blaikie pointing this out in post-commit review!
llvm-svn: 247432
When uses of personality functions were moved from LandingPadInst to
Function, we forgot to update SimplifyPersonality(). This patch corrects
that.
Note: SimplifyPersonality() is an optimization which replaces
personality functions with the default C++ personality when possible.
Without this update, some ObjC++ projects fail to link against C++
libraries (seeing as the exception ABI had effectively changed).
rdar://problem/22155434
llvm-svn: 247421
The type of a member pointer is incomplete if it has no inheritance
model. This lets us reuse more general logic already embedded in clang.
llvm-svn: 247346
It seems that there is a small bug, and we can't generate assume loads
when some virtual functions have internal visibility.
This reverts commit 982bb7d966947812d216489b3c519c9825cacbf2.
llvm-svn: 247332
Currently all variables used in OpenMP regions are captured into a record and passed to outlined functions in this record. It may result in poor performance because of overly complex analysis later in optimization passes. The patch makes parallel-based regions emit outlined functions with a list of captured variables instead. It reduces the code by at least 2*n GEPs, stores, and loads.
Codegen for task-based regions remains unchanged because the runtime requires that all captured variables are passed in a captured record.
llvm-svn: 247251
This flag causes the compiler to emit bit set entries for functions as well
as runtime bitset checks at indirect call sites. Depends on the new function
bitset mechanism.
Differential Revision: http://reviews.llvm.org/D11857
llvm-svn: 247238
Seems it broke the Polly build.
From http://lab.llvm.org:8011/builders/perf-x86_64-penryn-O3-polly-fast/builds/11687/steps/compile/logs/stdio:
In file included from /home/grosser/buildslave/perf-x86_64-penryn-O3-polly-fast/llvm.src/lib/TableGen/Record.cpp:14:0:
/home/grosser/buildslave/perf-x86_64-penryn-O3-polly-fast/llvm.src/include/llvm/TableGen/Record.h:369:3: error: looser throw specifier for 'virtual llvm::TypedInit::~TypedInit()'
/home/grosser/buildslave/perf-x86_64-penryn-O3-polly-fast/llvm.src/include/llvm/TableGen/Record.h:270:11: error: overriding 'virtual llvm::Init::~Init() noexcept (true)'
llvm-svn: 247222
Generate a call to assume(icmp %vtable, %global_vtable) after the constructor
call, for devirtualization purposes.
For more info go to:
http://lists.llvm.org/pipermail/cfe-dev/2015-July/044227.html
Edit:
Fixed version because of PR24479.
After this patch got reverted because of a ScalarEvolution bug (D12719),
it was merged again after John McCall's big patch (which added Address).
http://reviews.llvm.org/D11859
llvm-svn: 247199
We know that a reference can always be dereferenced. However, we don't
always know the number of bytes if the reference's pointee type is
incomplete. This case was correctly handled but we didn't consider the
case where the type is complete but we cannot calculate its size for ABI
specific reasons. In this specific case, a member pointer's size is
available only under certain conditions.
This fixes PR24703.
llvm-svn: 247188
Summary:
Currently clang provides no general way to generate nontemporal loads/stores.
There are some architecture-specific builtins for doing so (e.g. on x86), but
there is no way to generate a non-temporal store on, e.g., AArch64. This patch adds
generic builtins which are expanded to a simple store with '!nontemporal'
attribute in IR.
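A rough usage sketch (hypothetical functions; the builtin names added by this change are assumed):
void stream_store(int *p, int v) {
  __builtin_nontemporal_store(v, p); // ordinary store carrying !nontemporal metadata
}
int stream_load(int *p) {
  return __builtin_nontemporal_load(p);
}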
Differential Revision: http://reviews.llvm.org/D12313
llvm-svn: 247104
This function can be used to create a metadata identifier for a specific
type. No functionality change, but this will be used by D11857 and D12026.
Differential Revision: http://reviews.llvm.org/D12038
llvm-svn: 247098
When -fmodule-format is set to "obj", emit debug info for all types
declared in a module or referenced by a declaration into the module's
object file container.
This patch adds support for Objective-C types and methods.
llvm-svn: 247068
When -fmodule-format is set to "obj", emit debug info for all types
declared in a module or referenced by a declaration into the module's
object file container.
This patch adds support for C and C++ types.
llvm-svn: 247049
instruction used the ReturnValue as pointer operand or value operand. This
led to wrong code gen - in later stages (load-store elision code) the found
store and its operand would be erased, causing ReturnValue to become a <badref>.
The patch adds a check that makes sure that ReturnValue is a pointer operand of
store instruction. Regression test is also added.
This fixes PR24386.
Differential Revision: http://reviews.llvm.org/D12400
llvm-svn: 247003
separately from building the instruction so that it's
preserved even in -Asserts builds.
Employ C++'s mystical "comment" feature to discourage
breaking this in the future.
llvm-svn: 246991
Introduce an Address type to bundle a pointer value with an
alignment. Introduce APIs on CGBuilderTy to work with Address
values. Change core APIs on CGF/CGM to traffic in Address where
appropriate. Require alignments to be non-zero. Update a ton
of code to compute and propagate alignment information.
As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.
The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned. Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay. I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.
Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.
We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment. In particular,
field access now uses alignmentAtOffset instead of min.
Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs. For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint. That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.
ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments. In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments. That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.
I partially punted on applying this work to CGBuiltin. Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.
llvm-svn: 246985
We were crashing in CodeGen given input like this:
int self_alias(void) __attribute__((weak, alias("self_alias")));
such a self-alias is invalid, but instead of diagnosing the situation, we'd
proceed to produce IR for both the function declaration and the alias. Because
we already had a function named 'self_alias', the alias could not be named the
same thing, and so LLVM would pick a different name ('self_alias1' for example)
for that value. When we later called CodeGenModule::checkAliases, we'd look up
the IR value corresponding to the alias name, find the function declaration
instead, and then assert in a cast to llvm::GlobalAlias. The easiest way to prevent
this is simply to avoid creating the wrongly-named alias value in the first
place and issue the diagnostic there (instead of in checkAliases). We detect a
related cycle case in CodeGenModule::EmitAliasDefinition already, so this just
adds a second such check.
Even though the other test cases for this 'alias definition is part of a cycle'
diagnostic are in test/Sema/attr-alias-elf.c, I've added a separate regression
test for this case. This is because I can't add this check to
test/Sema/attr-alias-elf.c without disturbing the other test cases in that
file. In order to avoid construction of the bad IR values, this diagnostic
is emitted from within CodeGenModule::EmitAliasDefinition (and the relevant
declaration is not added to the Aliases vector). The other cycle checks are
done within the CodeGenModule::checkAliases function based on the Aliases
vector, called from CodeGenModule::Release. However, if there have been errors
earlier, HandleTranslationUnit does not call Release, and so checkAliases is
never called, and so none of the other diagnostics would be produced.
Fixes PR23509.
llvm-svn: 246882
This fixes an issue raised in D12412, where we generated invalid IR.
Thanks to Vedant Kumar for coming up with the initial work around.
Differential Revision: http://reviews.llvm.org/D12412
llvm-svn: 246880
Fix processing of shared variables with reference types in OpenMP constructs. Previously, if the variable was not marked in one of the private clauses, the reference to this variable was emitted incorrectly and caused an assertion later.
llvm-svn: 246846
Summary:
Dtor sanitization handled amidst other dtor cleanups,
between cleaning bases and fields. Sanitizer call pushed onto
stack of cleanup operations.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D12022
Refactoring dtor sanitizing emission order.
- Support multiple inheritance by poisoning after
member destructors are invoked, and before base
class destructors are invoked.
- Poison for virtual destructor and virtual bases.
- Repress dtor aliasing when sanitizing in dtor.
- CFE test for dtor aliasing, and repression of aliasing in dtor
code generation.
- Poison members on field-by-field basis, with collective poisoning
of trivial members when possible.
- Check msan flags and existence of fields, before dtor sanitizing,
and when determining if aliasing is allowed.
- Testing sanitizing bit fields.
llvm-svn: 246815
This implements basic support for compiling (though not yet assembling
or linking) for a WebAssembly target. Note that ABI details are not yet
finalized, and may change.
Differential Revision: http://reviews.llvm.org/D12002
llvm-svn: 246814
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
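A minimal sketch of what is now accepted (hypothetical function, ARM targets):
__fp16 half_sum(__fp16 a, __fp16 b) {
  return a + b; // arithmetic still promotes to float; only the call ABI changed
}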
llvm-svn: 246764
Original commit message:
[ARM] Allow passing/returning of __fp16 arguments
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246760
The ACLE (ARM C Language Extensions) 2.0 allows the __fp16 type to be
used as a function argument or return type (ACLE 1.1 did not).
The current public release of the AAPCS (2.09) states that __fp16 values
should be converted to single-precision before being passed or returned,
but AAPCS 2.10 (to be released shortly) changes this, so that they are
passed in the least-significant 16 bits of either a GPR (for base AAPCS)
or a single-precision register (for AAPCS-VFP). This does not change how
arguments are passed if they get passed on the stack.
This patch brings clang up to compliance with the latest versions of
both of these specs.
We can now set the __ARM_FP16_ARGS ACLE predefine, and we have always
been able to set the __ARM_FP16_FORMAT_IEEE predefine (we do not support
the alternative format).
llvm-svn: 246755
Fixed codegen for extended format of 'if' clauses with special 'directive-name-modifier' + ast-print tests for extended format of 'if' clause.
llvm-svn: 246748
every time it's called rather than attempting to cache the result.
It's unlikely to be called frequently and the overhead of using
it in the first place is already factored out.
llvm-svn: 246706
This patch depends on r246688 (D12341).
The goal is to make LLVM generate different code for these functions for a target that
has cheap branches (see PR23827 for more details):
int foo();
int normal(int x, int y, int z) {
  if (x != 0 && y != 0) return foo();
  return 1;
}
int crazy(int x, int y) {
  if (__builtin_unpredictable(x != 0 && y != 0)) return foo();
  return 1;
}
Differential Revision: http://reviews.llvm.org/D12458
llvm-svn: 246699
the main attribute and cache the results so we don't have to parse
a single attribute more than once.
This reapplies r246596 with a fix for an uninitialized class member,
and a couple of cleanups and formatting changes.
llvm-svn: 246610
GCC 4.8+ has a PowerPC-specific intrinsic, __builtin_ppc_get_timebase, to do
what Clang's __builtin_readcyclecounter does. For compatibility with code that
uses GCC's spelling (including glibc), support it as well.
Partially fixes PR23681.
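A minimal usage sketch (hypothetical function, PowerPC targets):
unsigned long long read_timebase(void) {
  return __builtin_ppc_get_timebase(); // equivalent to __builtin_readcyclecounter()
}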
llvm-svn: 246510
Also:
- Add a typedef to make working with the result easier.
- Update callers to use the new function.
- Make initFeatureMap out of line.
llvm-svn: 246468
Proper diagnostic and resolution of mangled names conflicts between C++ methods
and C functions. This patch implements support for functions/methods only;
support for variables is coming separately.
Differential Revision: http://reviews.llvm.org/D11297
llvm-svn: 246438
Added codegen for array sections in the 'depend' clause of the 'task' directive. It emits two pointers, one for the beginning of the array section and another for its end. The size of the section is calculated as (end + 1 - start) * sizeof(basic_element_type).
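A rough usage sketch of such an array section (hypothetical function):
void double_range(float *a, int lo, int len) {
  #pragma omp task depend(inout: a[lo:len]) // array section: start lo, length len
  {
    for (int i = lo; i < lo + len; ++i)
      a[i] *= 2.0f;
  }
}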
llvm-svn: 246422
This replaces the filtered generic iterator with a type-specific one based
on dyn_cast instead of comparing the kind enum. This allows us to use
range-based for loops and eliminates casts. No functionality change
intended.
llvm-svn: 246384
This patch does two things:
1) Don't error about dllimport/export on thread-local static local variables.
We put those attributes on static locals in dllimport/export functions
implicitly in case the function gets inlined. Now, for TLS variables this
is a problem because we can't import such variables, but it's a benign
problem because:
2) Make sure we never inline a dllimport function's TLS static locals. In fact,
never inline a dllimport function that references a non-imported function
or variable (because these are not defined in the importing library). This
seems to match MSVC's behaviour.
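A rough sketch of the affected pattern (hypothetical class, for illustration):
struct __declspec(dllimport) Counter {
  int next() {                     // implicitly dllimport'ed inline definition
    static thread_local int n = 0; // 1) no longer rejected; 2) the function is
    return ++n;                    //    never inlined into the importing module
  }
};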
Differential Revision: http://reviews.llvm.org/D12422
llvm-svn: 246338
Added codegen for array sections in the 'depend' clause of the 'task' directive. It emits two pointers, one for the beginning of the array section and another for its end. The size of the section is calculated as (end + 1 - start) * sizeof(basic_element_type).
llvm-svn: 246278
There was a linker problem, and it turns out that it is not always safe
to refer to a vtable. If the vtable is used, then we can refer to it
without any problem, but because we don't know whether it will be used or
not, we can only check if the vtable is external or if it is safe to emit it
speculatively (when the class doesn't have any inline virtual functions).
It should be fixed in the future.
http://reviews.llvm.org/D12385
llvm-svn: 246214
Usually debug info is created on the fly during codegen.
With this API it becomes possible to create standalone debug info
for types that are not referenced by any code, such as emitting debug info
for a clang module or for implementing something like -gfull.
Because on-the-fly debug info generation may still insert retained types
on top of them, all RetainedTypes are uniqued in CGDebugInfo::finalize().
llvm-svn: 246210
A couple of changes here:
a) Do less work in the case where we don't have a target attribute on the
function. We've already canonicalized the attributes for the function -
no need to do more work.
b) Use the newer canonicalized feature adding functions from TargetInfo
to do the work when we do have a target attribute. This enables us to diagnose
some warnings in the case of conflicting written attributes (only ppc does
this today) and also make sure to get all of the features for a cpu that's
listed rather than just change the cpu.
Updated all testcases accordingly and added a new testcase to verify that we'll
error out on ppc if we have some incompatible options using the existing diagnosis
framework there.
llvm-svn: 246195
to enable the use of external type references in the debug info
(a.k.a. module debugging).
The driver expands -gmodules to "-g -fmodule-format=obj -dwarf-ext-refs"
and passes that to cc1. All this does at the moment is set a flag in
CodeGenOpts.
http://reviews.llvm.org/D11958
llvm-svn: 246192
I stared at `false /*declaration*/` for quite some time before giving up
and checking the actual function to see what it meant. Replacing with
`/* isDefinition = */ false` to save myself effort later.
llvm-svn: 246095
Eventually, we will need sample profiles to be incorporated into the
inliner's cost models. To do this, we need the sample profile pass to
be a module pass.
This patch makes no functional changes beyond the mechanical adjustments
needed to run SampleProfile as a module pass.
llvm-svn: 245941
Summary:
The signatures of the methods in LLVM for creating EH pads/rets are changing
to require token arguments on rets and assume token return type on pads.
Update creation code accordingly.
Reviewers: majnemer, rnk
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D12109
llvm-svn: 245798
Summary:
According to CUDA documentation, global variables declared with __device__,
__constant__ can be initialized from host code, so mark them as
externally initialized. Because __shared__ variables cannot have an
initialization as part of their declaration and since the value may be kept
across different kernel invocations, the value of a __shared__ variable is effectively
undefined instead of zero-initialized.
Wrongly using a zero initializer may cause illegitimate optimizations, e.g.
removing an unused __constant__ variable because it's not updated in the device
code and its value is initialized with zero.
Test Plan: test/CodeGenCUDA/address-spaces.cu
Patch by Xuetian Weng
Reviewers: jholewinski, eliben, tra, jingyue
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D12241
llvm-svn: 245786
We had "vcvt_f16" and "VCVT_HIGH_F16": for other FP types, this naming
is used for intrinsics with integer overloads. The FP->FP conversions,
on the other hand, use the full "vcvt_f32_f64" name instead.
Use the same naming convention for the f16<->f32 conversions.
While there, reorder the definitions a little bit.
llvm-svn: 245763
This is important in the case that the LLVM-inferred llvm-struct
alignment is not the same as the clang-known C-struct alignment.
Differential Revision: http://reviews.llvm.org/D12243
llvm-svn: 245719
Add emission of metadata for simd loops in presence of 'simdlen' clause.
If 'simdlen' clause is provided without 'safelen' clause, the vectorizer width for the loop is set to value of 'simdlen' clause + all read/write ops in loop are marked with '!llvm.mem.parallel_loop_access' metadata.
If 'simdlen' clause is provided along with 'safelen' clause, the vectorizer width for the loop is set to value of 'simdlen' clause + all read/write ops in loop are not marked with '!llvm.mem.parallel_loop_access' metadata.
If 'safelen' clause is provided without 'simdlen' clause, the vectorizer width for the loop is set to value of 'safelen' clause + all read/write ops in loop are not marked with '!llvm.mem.parallel_loop_access' metadata.
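A minimal usage sketch (hypothetical loop, paraphrasing the first case above):
void axpy(float *x, float *y, float a, int n) {
  #pragma omp simd simdlen(8) // no safelen: width 8, accesses marked parallel
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}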
llvm-svn: 245697
Add parsing/sema analysis for 'simdlen' clause in simd directives. Also add check that if both 'safelen' and 'simdlen' clauses are specified, the value of 'simdlen' parameter is less than the value of 'safelen' parameter.
llvm-svn: 245692
OpenMP 4.1 allows the use of variables with reference types in all private clauses (private, firstprivate, lastprivate, linear, etc.). The patch allows the use of such variables and fixes codegen for linear variables with reference types.
llvm-svn: 245268
Blender uses statement expressions in the condition of a loop under control of '#pragma omp parallel for'. This condition is used several times in different expressions required for codegen of the loop directive. If there are variables defined in the statement expression, an assert fires during codegen because of redefinition of the same variables.
We have to rebuild several expressions to be sure that all variables are unique.
llvm-svn: 245041
Make the copy/move ctors protected and defaulted in the base, make the
derived classes final to avoid exposing any slicing-prone APIs.
Also, while I'm here, simplify the use of buildByrefHelpers by taking
the parameter by value instead of non-const ref. None of the callers
care about observing the state after the call.
llvm-svn: 244990
We risk iterator invalidation issues if we use a DenseMap to hold the
backing storage for an APValue. Instead, BumpPtrAllocate them and
use APValue * as our DenseMap value.
Also, don't assume that MaterializedGlobalTemporaryMap won't regrow
between when we initially perform a lookup and later on when we actually
try to insert into it.
This fixes PR24289.
Differential Revision: http://reviews.llvm.org/D11629
llvm-svn: 244989
Summary: Poisoning applied to only class members, and before dtors for base class invoked
Implement poisoning of only class members in dtor, as opposed to also
poisoning fields inherited from base classes. Members are poisoned
only once, by the last dtor for a class. Skip poisoning if class has
no fields.
Verify emitted code for derived class with virtual destructor sanitizes
its members only once.
Removed patch file containing extraneous changes.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D11951
Simplified test cases for use-after-dtor
Summary: Simplified test cases to focus on one feature at time.
Tests updated to align with new emission order for sanitizing
callback.
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D12003
llvm-svn: 244933
After r244870 flush() will only compare two null pointers and return,
doing nothing but wasting run time. The call is not required any more
as the stream and its SmallString are always in sync.
Thanks to David Blaikie for reviewing.
llvm-svn: 244928
Verify emitted code for derived class with virtual destructor sanitizes its members only once.
Changed emission order for dtor callback, so only the last dtor for a class emits the sanitizing callback, while ensuring that class members are poisoned before base class destructors are invoked.
Skip poisoning of members, if class has no fields.
Removed patch file containing extraneous changes.
Summary: Poisoning applied to only class members, and before dtors for base class invoked
Reviewers: eugenis, kcc
Differential Revision: http://reviews.llvm.org/D11951
llvm-svn: 244819
Summary:
float_cast_overflow is the only UBSan check without a source location attached.
This patch propagates SourceLocations where necessary to get them to the
EmitCheck() call.
Reviewers: rsmith, ABataev, rjmccall
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11757
llvm-svn: 244568
This patch and a related llvm patch solve the problem of having to explicitly enable analysis when specifying a loop hint pragma to get the diagnostics. Passing AlwaysPrint as the pass name (see below) causes the front-end to print the diagnostic if the user has specified '-Rpass-analysis' without an '=<target-pass>'. Users of loop hints can pass that compiler option without having to specify the pass and they will get diagnostics for only those loops with loop hints.
llvm-svn: 244556
Following one of the appended options will allow the loop to be vectorized. We do not include a command line option for modifying the pointer checking threshold because there is no clang-level interface for this currently.
llvm-svn: 244526
With this patch clang appends the command line options that would allow vectorization when floating-point commutativity is required. Specifically those are enabling fast-math or specifying a loop hint.
llvm-svn: 244492
These changes are for Android x86_64 targets to be compatible
with current Android g++ and conform to the AMD64 ABI.
https://llvm.org/bugs/show_bug.cgi?id=23897
* Return type of long double (fp128) should be fp128, not x86_fp80.
* Varargs of long double (fp128) may be passed in registers and overflow to memory.
https://llvm.org/bugs/show_bug.cgi?id=24111
* Return value of long double (fp128) _Complex should be in memory like a structure of {fp128,fp128}.
Differential Revision: http://reviews.llvm.org/D11437
llvm-svn: 244468
This change adds the new unroll metadata "llvm.loop.unroll.enable" which directs
the optimizer to unroll a loop fully if the trip count is known at compile time, and
unroll partially if the trip count is not known at compile time. This differs from
"llvm.loop.unroll.full" which explicitly does not unroll a loop if the trip count is not
known at compile time.
With this change "#pragma unroll" generates "llvm.loop.unroll.enable" rather than
"llvm.loop.unroll.full" metadata. This changes the semantics of "#pragma unroll" slightly
to mean "unroll aggressively (fully or partially)" rather than "unroll fully or not at all".
The motivating example for this change was some internal code with a loop marked
with "#pragma unroll" which only sometimes had a compile-time trip count depending
on template magic. When the trip count was a compile-time constant, everything works
as expected and the loop is fully unrolled. However, when the trip count was not a
compile-time constant the "#pragma unroll" explicitly disabled unrolling of the loop(!).
Removing "#pragma unroll" caused the loop to be unrolled partially which was desirable
from a performance perspective.
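A minimal sketch of the motivating pattern (hypothetical loop; the trip count is not a compile-time constant):
void scale(float *a, int n) {
  #pragma unroll
  for (int i = 0; i < n; ++i) // previously: unrolling explicitly disabled here;
    a[i] *= 2.0f;             // now: the loop may be partially unrolled
}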
llvm-svn: 244467
MinGW has some pretty strange behavior around RTTI and
dllimport/dllexport:
- RTTI data is never imported
- RTTI data is only exported if the class has no key function.
llvm-svn: 244266
OpenMP 4.1 allows the use of variables with reference types in private clauses and, therefore, in init expressions of the canonical loop forms.
llvm-svn: 244209
When a thunk is generated with a call to the original adjusted function,
the thunk appears in the debugger call stack. We want the backend to perform
tail-call optimization on the call, to make it invisible to the debugger.
This fixes PR24235
Patch by: amjad.aboud@intel.com
Differential Revision: http://reviews.llvm.org/D11476
llvm-svn: 244207
Summary:
By default, 'clang' emits dwarf and 'clang-cl' emits codeview. You can
force emission of one or both by passing -gcodeview and -gdwarf to
either driver.
Reviewers: dblaikie, hans
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11742
llvm-svn: 244097
Support for emitting libcalls for __atomic_fetch_nand and
__atomic_{add,sub,and,or,xor,nand}_fetch was missing; add it, and some
test cases.
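For example, a call such as the following is now supported (a sketch; whether a libcall is actually emitted depends on the type's size and lock-freedom):
unsigned and_fetch(unsigned *p, unsigned mask) {
  // returns the *new* value; may lower to a call into libatomic
  return __atomic_and_fetch(p, mask, __ATOMIC_SEQ_CST);
}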
Differential Revision: http://reviews.llvm.org/D10847
llvm-svn: 244063
a BumpPtrAllocator. This at least now handles the case where there is no
concatenation without calling memcpy on a null pointer. It might be
interesting to handle the case where everything is empty without
round-tripping through the allocator, but it wasn't clear to me if the
pointer returned is significant in any way, so I've left it in
a conservatively more-correct state.
Again, found with UBSan.
llvm-svn: 243948
Summary: In addition to checking compiler flags, the front-end also examines the attributes of the destructor definition to ensure that the SanitizeMemory attribute is attached.
Reviewers: eugenis, kcc
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11727
refactored test into new file, revised how function attribute examined
modified test to examine default dtor with and without attribute
removed attribute check
llvm-svn: 243912
This patch fixes bug 23800 ( https://llvm.org/bugs/show_bug.cgi?id=23800#c2 ). There existed a case where the index operand from extractelement was directly used to create a shufflevector mask. Since the index can be of any integral type but the mask must only contain 32 bit integers a 64 bit index operand led to an assertion error later on.
Committed on behalf of mpflanzer (Moritz Pflanzer)
Differential Revision: http://reviews.llvm.org/D10838
llvm-svn: 243851
The new EH instructions make it possible for LLVM to generate .xdata
tables that the MSVC personality routines will be happy about. Because
this is experimental, hide it behind a -cc1 flag (-fnew-ms-eh).
Differential Revision: http://reviews.llvm.org/D11405
llvm-svn: 243767
Adjust to LLVM DIBuilder API changes in r243764, using
`createAutoVariable()` and `createParameterVariable()` in place of
`createLocalVariable()`. No real functionality change here.
llvm-svn: 243765
In llvm commit r243581, a reverse range adapter was added which allows
us to change code such as
for (auto I = Fields.rbegin(), E = Fields.rend(); I != E; ++I) {
in to
for (const FieldDecl *I : llvm::reverse(Fields))
This commit changes a few of the places in clang which are eligible to use
this new adapter.
llvm-svn: 243663
new GV (usually NAME.1) instead of the correct NAME of the old GV. Moving comdat
creation after GV replacement solves this. Patch + testcase.
Reviewed by Reid Kleckner.
http://reviews.llvm.org/D11594
llvm-svn: 243525
This will be used for old targets like Android that do not
support ELF TLS models.
Differential Revision: http://reviews.llvm.org/D10524
llvm-svn: 243441
- Use cached LLVM types
- Turn SmallVectors into Arrays/ArrayRef if the size is static
- Use ConstantInt::get's implicit splatting for vector types
No functionality change intended.
llvm-svn: 243425
This was calling FD->hasBody(), meaning "Does the function that this
decl refers to have a body?", rather than
FD->doesThisDeclarationHaveABody(), meaning "Is this decl a
non-deleted definition?".
We might want to consider renaming these APIs :/
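A sketch of the distinction (assuming FD is a clang::FunctionDecl pointer):
#include "clang/AST/Decl.h"
static bool isTheDefinition(const clang::FunctionDecl *FD) {
  // true only if this particular redeclaration is a non-deleted definition
  return FD->doesThisDeclarationHaveABody();
}
static bool someRedeclHasBody(const clang::FunctionDecl *FD) {
  // true if the function this decl refers to has a body anywhere
  return FD->hasBody();
}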
llvm-svn: 243360
When '#pragma clang loop vectorize(assume_safety)' was specified on a loop, other loop hints were lost. The problem is that CGLoopInfo attaches metadata differently than EmitCondBrHints in CGStmt. For do-loops, CGLoopInfo attaches metadata to the br in the body block; for while and for loops, to the br in the inc block. EmitCondBrHints, on the other hand, always attaches data to the br in the cond block. When specifying assume_safety, CGLoopInfo emits empty llvm.loop metadata, shadowing the metadata in the cond block. Loop transformations like rotate and unswitch would then eliminate the cond block and its non-empty metadata.
This patch unifies both approaches for adding metadata and modifies the existing safety tests to include non-assume_safety loop hints.
llvm-svn: 243315
__builtin_frame_address requires its argument to be a constant
expression which already implies that it cannot have undefined behavior.
However, we used EmitScalarExpr to emit the argument causing UBSan to
try to check for overflow.
Instead, use the constant expression emission system.
This fixes PR24256.
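For reference, the kind of call affected (a minimal sketch):
void *current_frame(void) {
  // the argument is required to be a constant expression, so it is now
  // emitted via the constant-expression path and UBSan adds no overflow check
  return __builtin_frame_address(0);
}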
llvm-svn: 243206
Change `getOrCreateLimitedType()` to return a `DICompositeType` and
remove the casts from its callers. Inside, I've strengthened a `cast`
from `DICompositeTypeBase`, but the casts in the callers already prove
that this is safe. There should be no functionality change here.
llvm-svn: 243155
Generating available_externally vtables for optimization purposes.
Unfortunately, the Itanium ABI doesn't guarantee that we will be able to
refer to a virtual inline method by name.
But when we don't have any inline virtual methods, and the key function is
not defined in this TU, we can emit the vtable and
mark it as available_externally.
This patch will help devirtualization.
Differential Revision: http://reviews.llvm.org/D11441
llvm-svn: 243090
The catch keyword isn't really part of a region, so it's fairly
meaningless to extend into it. This was usually harmless, but it could
crash when catch blocks involved macros in strange ways.
llvm-svn: 243066
Sometimes we can provide an initializer for static locals, in which case
we sometimes might need to change the type. Changing the type requires
making a new LLVM GlobalVariable, and in this codepath we were
forgetting to transfer the comdat.
Fixes PR23838.
Patch by Ivan Garramona.
llvm-svn: 242704
This is the PS4 counterpart to r229376, which quotes the library name if the
name contains a space. It was discovered that if a library name contains both
double-quote and space characters, quoting the name might produce unexpected
results, but we are mostly concerned with a Windows host environment, which
does not allow double-quote or slashes in file/folder names.
Differential Revision: http://reviews.llvm.org/D11275
llvm-svn: 242689
- Make it a proper random access iterator with a little help from iterator_adaptor_base
- Clean up users of magic dereferencing. The iterator should behave like an Expr **.
- Make it an implementation detail of Stmt. This allows inlining of the assertions.
llvm-svn: 242608
If this assert does fire, the no-asserts behaviour is an infinite
loop. It's better to crash in this case so we get a crash report and
stop wasting the user's cpu cycles.
llvm-svn: 242591
That platform has alignof(uint64_t) == 4, but, since LLVM_ALIGNAS(...)
cannot take anything but literal integers due to MSVC limitations, the
literal '8' used there didn't match. Switch ScopeStackAlignment to
just use 8, as well.
llvm-svn: 242578
Currently, -save-temps will cause ObjCARC optimization to be dropped,
the sanitizer passes to run early in the pipeline, and profiling
instrumentation to run twice.
Fix the issue by properly disabling all passes in the optimization
pipeline when generating bitcode output, and by parsing some of the
language options even when the input is bitcode so the passes can be
set up correctly.
llvm-svn: 242565
Some const-correctness changes snuck in here too, since they were in the
area of code I was modifying.
This seems to make Clang actually work without Bus Error on
32bit-sparc.
Follow-up patches will factor out a trailing-object helper class, to
make classes using the idiom of appending objects to other objects
easier to understand, and to ensure (with static_assert) that required
alignment guarantees continue to hold.
Differential Revision: http://reviews.llvm.org/D10272
llvm-svn: 242554
We shouldn't crash despite the AMD64 ABI not giving clear guidance as to
how to pass around vector types <= 32 bits. Instead, classify such
vectors as INTEGER to be compatible with GCC.
This fixes PR24162.
llvm-svn: 242508
- introduces a new cc1 option -fmodule-format=[raw,obj]
with 'raw' being the default
- supports arbitrary module container formats that libclang is agnostic to
- adds the format to the module hash to avoid collisions
- splits the old PCHContainerOperations into PCHContainerWriter and
a PCHContainerReader.
Thanks to Richard Smith for reviewing this patch!
llvm-svn: 242499
We now use the sanitizer special case list to decide which types to blacklist.
We also support a special blacklist entry for types with a uuid attribute,
which are generally COM types whose virtual tables are defined externally.
Differential Revision: http://reviews.llvm.org/D11096
llvm-svn: 242286
This patch corresponds to review:
http://reviews.llvm.org/D11184
A number of new interfaces for altivec.h (as mandated by the ABI):
vector float vec_cpsgn(vector float, vector float)
vector double vec_cpsgn(vector double, vector double)
vector double vec_or(vector bool long long, vector double)
vector double vec_or(vector double, vector bool long long)
vector double vec_re(vector double)
vector signed char vec_cntlz(vector signed char)
vector unsigned char vec_cntlz(vector unsigned char)
vector short vec_cntlz(vector short)
vector unsigned short vec_cntlz(vector unsigned short)
vector int vec_cntlz(vector int)
vector unsigned int vec_cntlz(vector unsigned int)
vector signed long long vec_cntlz(vector signed long long)
vector unsigned long long vec_cntlz(vector unsigned long long)
vector signed char vec_nand(vector bool signed char, vector signed char)
vector signed char vec_nand(vector signed char, vector bool signed char)
vector signed char vec_nand(vector signed char, vector signed char)
vector unsigned char vec_nand(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector unsigned char)
vector short vec_nand(vector bool short, vector short)
vector short vec_nand(vector short, vector bool short)
vector short vec_nand(vector short, vector short)
vector unsigned short vec_nand(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector unsigned short)
vector int vec_nand(vector bool int, vector int)
vector int vec_nand(vector int, vector bool int)
vector int vec_nand(vector int, vector int)
vector unsigned int vec_nand(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector unsigned int)
vector signed long long vec_nand(vector bool long long, vector signed long long)
vector signed long long vec_nand(vector signed long long, vector bool long long)
vector signed long long vec_nand(vector signed long long, vector signed long long)
vector unsigned long long vec_nand(vector bool long long, vector unsigned long long)
vector unsigned long long vec_nand(vector unsigned long long, vector bool long long)
vector unsigned long long vec_nand(vector unsigned long long, vector unsigned long long)
vector signed char vec_orc(vector bool signed char, vector signed char)
vector signed char vec_orc(vector signed char, vector bool signed char)
vector signed char vec_orc(vector signed char, vector signed char)
vector unsigned char vec_orc(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector unsigned char)
vector short vec_orc(vector bool short, vector short)
vector short vec_orc(vector short, vector bool short)
vector short vec_orc(vector short, vector short)
vector unsigned short vec_orc(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector unsigned short)
vector int vec_orc(vector bool int, vector int)
vector int vec_orc(vector int, vector bool int)
vector int vec_orc(vector int, vector int)
vector unsigned int vec_orc(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector unsigned int)
vector signed long long vec_orc(vector bool long long, vector signed long long)
vector signed long long vec_orc(vector signed long long, vector bool long long)
vector signed long long vec_orc(vector signed long long, vector signed long long)
vector unsigned long long vec_orc(vector bool long long, vector unsigned long long)
vector unsigned long long vec_orc(vector unsigned long long, vector bool long long)
vector unsigned long long vec_orc(vector unsigned long long, vector unsigned long long)
vector signed char vec_div(vector signed char, vector signed char)
vector unsigned char vec_div(vector unsigned char, vector unsigned char)
vector signed short vec_div(vector signed short, vector signed short)
vector unsigned short vec_div(vector unsigned short, vector unsigned short)
vector signed int vec_div(vector signed int, vector signed int)
vector unsigned int vec_div(vector unsigned int, vector unsigned int)
vector signed long long vec_div(vector signed long long, vector signed long long)
vector unsigned long long vec_div(vector unsigned long long, vector unsigned long long)
vector unsigned char vec_mul(vector unsigned char, vector unsigned char)
vector unsigned int vec_mul(vector unsigned int, vector unsigned int)
vector unsigned long long vec_mul(vector unsigned long long, vector unsigned long long)
vector unsigned short vec_mul(vector unsigned short, vector unsigned short)
vector signed char vec_mul(vector signed char, vector signed char)
vector signed int vec_mul(vector signed int, vector signed int)
vector signed long long vec_mul(vector signed long long, vector signed long long)
vector signed short vec_mul(vector signed short, vector signed short)
vector signed long long vec_mergeh(vector signed long long, vector signed long long)
vector signed long long vec_mergeh(vector signed long long, vector bool long long)
vector signed long long vec_mergeh(vector bool long long, vector signed long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergeh(vector bool long long, vector unsigned long long)
vector double vec_mergeh(vector double, vector double)
vector double vec_mergeh(vector double, vector bool long long)
vector double vec_mergeh(vector bool long long, vector double)
vector signed long long vec_mergel(vector signed long long, vector signed long long)
vector signed long long vec_mergel(vector signed long long, vector bool long long)
vector signed long long vec_mergel(vector bool long long, vector signed long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergel(vector bool long long, vector unsigned long long)
vector double vec_mergel(vector double, vector double)
vector double vec_mergel(vector double, vector bool long long)
vector double vec_mergel(vector bool long long, vector double)
vector signed int vec_pack(vector signed long long, vector signed long long)
vector unsigned int vec_pack(vector unsigned long long, vector unsigned long long)
vector bool int vec_pack(vector bool long long, vector bool long long)
llvm-svn: 242171
The fix is to remove the duplicate copy-initialization of the only memcpy-able struct member and to correct the address of aggregately initialized members in destructor calls during stack unwinding (the address of a struct member is obtained with a GEP instead of a 'bitcast').
Differential Revision: http://reviews.llvm.org/D10990
llvm-svn: 242127
which is actually the variable backing up the llvm -time-passes command
line argument.
llvm::TimePassesIsEnabled is actually being initialized in CodeGenAction.
llvm-svn: 242099
Under the -fsanitize-memory-use-after-dtor (disabled by default) insert
an MSan runtime library call at the end of every destructor.
Patch by Naomi Musgrave.
llvm-svn: 242097
Previously, clang/llvm treated inline-asm instructions conservatively,
choosing not to eliminate the instructions or hoisting them out of a loop
even when it was safe to do so. This commit makes changes to attach a
readonly or readnone attribute to an inline-asm instruction, which enables
passes such as LICM and EarlyCSE to move or optimize away the instruction.
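A sketch of the kind of inline asm that can now be annotated and hoisted (x86 AT&T syntax, illustrative only):
static inline unsigned bswap32(unsigned x) {
  unsigned r = x;
  // no memory operands, no side effects: a candidate for readnone
  __asm__("bswap %0" : "+r"(r));
  return r;
}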
rdar://problem/11358192
Differential Revision: http://reviews.llvm.org/D10546
llvm-svn: 241930
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:
struct XBitfield {
unsigned b1 : 10;
unsigned b2 : 12;
unsigned b3 : 10;
};
struct YBitfield {
char x;
struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;
unsigned test7() {
// CHECK: @test7
// CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
return gbitfield.y.b2;
}
The "align 4" is actually wrong. Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.
This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp. Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.
Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).
In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage. This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.
Differential Revision: http://reviews.llvm.org/D11034
llvm-svn: 241916
Code in CGCall.cpp that loads up function arguments that need to be
coerced to a different type may in some cases ignore the fact that
the source of the argument is not naturally aligned. This may cause
incorrect code to be generated. In some places in CreateCoercedLoad,
we already have setAlignment calls to address this, but I ran into one
where it was missing, causing wrong code generation on SystemZ.
However, in that location, we do not actually know what alignment of
the source location we can rely on; the callers do not pass anything
to this routine. This is already an issue in other places in
CreateCoercedLoad; and the same problem exists for CreateCoercedStore.
To avoid pessimising code, and to fix the FIXMEs already in place,
this patch also adds an alignment argument to the CreateCoerced*
routines and uses it instead of forcing an alignment of 1. The
callers are changed to pass in the best information they have.
This actually requires changes in a number of existing test cases
since we now get better alignment in many places.
Differential Revision: http://reviews.llvm.org/D11033
llvm-svn: 241898
We were previously creating bit set entries at virtual table offset
sizeof(void*) unconditionally under the Microsoft C++ ABI. This is incorrect
if RTTI data is disabled; in that case the "address point" is at offset
0. This change modifies bit set emission to take into account whether RTTI
data is being emitted.
Also make a start on a blacklisting scheme for records.
Differential Revision: http://reviews.llvm.org/D11048
llvm-svn: 241845
Move the diagnostic back to codegen so that we can compile ATL on the
self-host bot. We don't actually end up emitting code for the __try, so
the diagnostic won't be hit.
llvm-svn: 241761
For Mips direct-to-nacl, the goal is to be close to le32 front-end and
use Mips32EL backend. This patch defines new NaClMips32ELTargetInfo and
modifies it slightly to be close to le32. It also adds the necessary parts,
in line with ARM and X86.
Differential Revision: http://reviews.llvm.org/D10739
llvm-svn: 241678
The fix is to emit cleanups for arrays of memcpy-able objects in a struct if an exception is thrown later during copy-construction.
Differential Revision: http://reviews.llvm.org/D10989
llvm-svn: 241670
We didn't correctly process the case where a base class is classified as
MEMORY. This would cause us to trip over an assertion.
This fixes PR24020.
Differential Revision: http://reviews.llvm.org/D10907
llvm-svn: 241667
We forgot to run postMerge after deciding that the union had to be
classified as MEMORY. This left us with Lo == MEMORY and Hi == SSEUp
which is an invalid combination.
This fixes PR24021.
Differential Revision: http://reviews.llvm.org/D10908
llvm-svn: 241666
This patch adds ObjectFilePCHContainerOperations uses the LLVM backend
to put the contents of a PCH into a __clangast section inside a COFF, ELF,
or Mach-O object file container.
This is done to facilitate module debugging by making it possible to
store the debug info for the types defined by a module alongside the AST.
rdar://problem/20091852
llvm-svn: 241620
When messaging a method that was defined in an Objective-C class (or
category or extension thereof) that has type parameters, substitute
the type arguments for those type parameters. Similarly, substitute
into property accesses, instance variables, and other references.
This includes general infrastructure for substituting the type
arguments associated with an ObjCObject(Pointer)Type into a type
referenced within a particular context, handling all of the
substitutions required to deal with (e.g.) inheritance involving
parameterized classes. In cases where no type arguments are available
(e.g., because we're messaging via some unspecialized type, id, etc.),
we substitute in the type bounds for the type parameters instead.
Example:
@interface NSSet<T : id<NSCopying>> : NSObject <NSCopying>
- (T)firstObject;
@end
void f(NSSet<NSString *> *stringSet, NSSet *anySet) {
[stringSet firstObject]; // produces NSString*
[anySet firstObject]; // produces id<NSCopying> (the bound)
}
When substituting for the type parameters given an unspecialized
context (i.e., no specific type arguments were given), substituting
the type bounds unconditionally produces type signatures that are too
strong compared to the pre-generics signatures. Instead, use the
following rule:
- In covariant positions, such as method return types, replace type
parameters with “id” or “Class” (the latter only when the type
parameter bound is “Class” or qualified class, e.g.,
“Class<NSCopying>”)
- In other positions (e.g., parameter types), replace type
parameters with their type bounds.
- When a specialized Objective-C object or object pointer type
contains a type parameter in its type arguments (e.g.,
NSArray<T>*, but not NSArray<NSString *> *), replace the entire
object/object pointer type with its unspecialized version (e.g.,
NSArray *).
llvm-svn: 241543
Produce type parameter declarations for Objective-C type parameters,
and attach lists of type parameters to Objective-C classes,
categories, forward declarations, and extensions as
appropriate. Perform semantic analysis of type bounds for type
parameters, both in isolation and across classes/categories/extensions
to ensure consistency.
Also handle (de-)serialization of Objective-C type parameter lists,
along with sundry other things one must do to add a new declaration to
Clang.
Note that Objective-C type parameters are typedef name declarations,
like typedefs and C++11 type aliases, in support of type erasure.
Part of rdar://problem/6294649.
llvm-svn: 241541
different function signatures. (Previously clang would emit all block
pointer types with the type of the first block pointer in the compile
unit.)
rdar://problem/21602473
llvm-svn: 241534
This reverts commit r241244, but restricts SEH support to Win64.
This way, Chromium builds will still fall back on TUs with SEH, and
Clang developers can work on this incrementally upstream while patching
this small predicate locally. It'll also make it easier to review small
fixes.
llvm-svn: 241533
The patch is the same except for the addition of a new test for the
issue that required reverting the dependent llvm commit.
--Original Commit Message--
Pass down the -flto option to the -cc1 job, and from there into the
CodeGenOptions and onto the PassManagerBuilder. This enables gating
the new EliminateAvailableExternally module pass on whether we are
preparing for LTO.
If we are preparing for LTO (e.g. a -flto -c compile), the new pass is not
included as we want to preserve available externally functions for possible
link time inlining.
llvm-svn: 241467
This is needed to use clang's command line option "-ftrap-function" for LTO and
enable changing the trap function name on a per-call-site basis.
rdar://problem/21225723
Differential Revision: http://reviews.llvm.org/D10831
llvm-svn: 241306
While there replace stable_sort of std::string with just sort, stability
is not necessary for "simple" value types. No functional change intended.
llvm-svn: 241299
The following code is generated for this construct:
```
if (__kmpc_cancellationpoint(ident_t *loc, kmp_int32 global_tid, kmp_int32 cncl_kind) != 0)
<exit from outer innermost construct>;
```
llvm-svn: 241239
32-bit finally funclets are intended to be called both directly from the
parent function and indirectly from the EH runtime. Because we aren't
contorting LLVM's X86 prologue to match MSVC's, calling the finally
block directly passes in a different value of EBP than the one that the
runtime provides. We need an adapter thunk to adjust EBP to the expected
value. However, WinEHPrepare already has to solve this problem when
cleanups are not pre-outlined, so we can go ahead and rely on it rather
than duplicating work.
Now we only do the llvm.x86.seh.recoverfp dance for 32-bit SEH filter
functions.
llvm-svn: 241187
This re-lands r236052 and adds support for __exception_code().
In 32-bit SEH, the exception code is not available in eax. It is only
available in the filter function, and now we arrange to load it and
store it into an escaped variable in the parent frame.
As a consequence, we have to disable the "catch i8* null" optimization
on 32-bit and always generate a filter function. We can re-enable the
optimization if we detect an __except block that doesn't use the
exception code, but this probably isn't worth optimizing.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D10852
llvm-svn: 241171
Function static variables, typedefs and records (class, struct or union) declared inside
a lexical scope were associated with the function as their parent scope, rather than the
lexical scope they are defined or declared in.
This fixes PR19238
Patch by: amjad.aboud@intel.com
Differential Revision: http://reviews.llvm.org/D9760
llvm-svn: 241154
When an internal-linkage thunk is code gen'd, CodeGenVTables::emitThunk
will first be called with ForVTable=true (which incorrectly set the
thunk's linkage to available_externally under the Itanium ABI) and later
with ForVTable=false (which reset it to internal). Because we will always
see a call with ForVTable=false, this incorrect linkage never ended up in
the final IR. However, the temporary presence of this linkage caused us
to give such functions a comdat as a result of code introduced in r241102.
To avoid this, check that the thunk is externally visible before giving it
available_externally linkage.
llvm-svn: 241136
The LifetimeExtendedCleanupHeader is carefully fit into 32 bytes,
meaning that cleanups on the LifetimeExtendedCleanupStack are *always*
allocated at a misaligned address and cause undefined behaviour.
There are two ways to solve this - add padding after the header when
we allocated our cleanups, or just simplify the header and let it use
64 bits in the first place. I've opted for the latter, and added a
static assert to avoid the issue in the future.
llvm-svn: 241133
using a string map to canonicalize. Fix up a couple of testcases
that needed changing since we are no longer simply appending features
to the list, but all of their mask dependencies as well.
llvm-svn: 241129
Previously we were not assigning a comdat to thunks in the Microsoft ABI,
which would have required us to emit these functions outside of a comdat.
(Due to an inconsistency in how we were emitting objects, we were getting this
right most of the time, but only when compiling with function sections.) This
code generator change causes us to create a comdat for each thunk.
Differential Revision: http://reviews.llvm.org/D10829
llvm-svn: 241102
This allows a module-aware debugger such as LLDB to import the currently
visible modules before dropping into the expression evaluator.
rdar://problem/20965932
llvm-svn: 241084
isTriviallyRecursive is a hack used to bridge a gap between the
expectations that source code assumes and the semantics that LLVM IR can
provide. Specifically, asm labels on functions are treated as an
explicit name for a GlobalObject in Clang but treated like an
output-processing step in GCC. Tweak this hack a little further to emit
calls to library functions instead of emitting an incorrect definition.
The definition in question would have available_externally linkage (this
is OK) but result in a call to itself which will either result in an
infinite loop or stack overflow.
This fixes PR23964.
llvm-svn: 241043
MSVC only generates array cookies if the class has a destructor. This
is problematic when having to call T::operator delete[](void *, size_t)
because the second (size) argument is impossible to synthesize
correctly if the class has no destructor (because there will be no array
cookie).
Instead, MSVC passes the size of the class. Do the same, for
compatibility, instead of crashing.
This fixes PR23990.
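A sketch of the situation (illustrative only):
#include <cstddef>
struct NoDtor {
  int x;
  void operator delete[](void *p, std::size_t n) {  // MSVC now receives sizeof(NoDtor) in n
    ::operator delete[](p);
  }
};
void destroy(NoDtor *p) {
  delete[] p;  // no destructor, so no array cookie exists to recover a count from
}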
llvm-svn: 241038
This matches the implementation of the gcc support for the same
feature, including checking the values set up by libgcc at runtime.
The structure looks like this:
unsigned int __cpu_vendor;
unsigned int __cpu_type;
unsigned int __cpu_subtype;
unsigned int __cpu_features[1];
with a set of enums to match the various fields that are filled out after
parsing the output of the cpuid instruction.
This also adds a set of errors checking for valid input (and cpu).
compiler-rt support for this and the other builtins in this family
(__builtin_cpu_init and __builtin_cpu_is) is forthcoming.
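Presumably the builtin added here is __builtin_cpu_supports (an assumption based on the family named above); a typical use looks like:
int have_avx2(void) {
  // returns nonzero if the running CPU reports AVX2 support
  return __builtin_cpu_supports("avx2");
}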
llvm-svn: 240994
We failed to see that we should have deferred the creation of a type
which references a type currently under construction because of atomic
sugar.
This fixes PR23985.
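A minimal sketch of the shape of code involved (an assumption about the reproducer, not taken from the bug report):
struct Node {
  _Atomic(struct Node *) next;  // references Node, still under construction, through atomic sugar
  int value;
};
struct Node head;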
llvm-svn: 240989
We had two separate paths for member pointer conversion: one which
takes a constant and another which takes an arbitrary value. In the
latter case, we are permitted to construct arbitrary instructions.
It turns out that the bulk of the member pointer conversion is sharable
if we construct an artificial IRBuilder.
llvm-svn: 240921
This patch corresponds to review:
http://reviews.llvm.org/D10637
This is the first round of additions of missing builtins listed in the ABI document. More to come (this builds on what seurer already added). This patch adds:
vector signed long long vec_abs(vector signed long long)
vector double vec_abs(vector double)
vector signed long long vec_add(vector signed long long, vector signed long long)
vector unsigned long long vec_add(vector unsigned long long, vector unsigned long long)
vector double vec_add(vector double, vector double)
vector double vec_and(vector bool long long, vector double)
vector double vec_and(vector double, vector bool long long)
vector double vec_and(vector double, vector double)
vector signed long long vec_and(vector signed long long, vector signed long long)
vector double vec_andc(vector bool long long, vector double)
vector double vec_andc(vector double, vector bool long long)
vector double vec_andc(vector double, vector double)
vector signed long long vec_andc(vector signed long long, vector signed long long)
vector double vec_ceil(vector double)
vector bool long long vec_cmpeq(vector double, vector double)
vector bool long long vec_cmpge(vector double, vector double)
vector bool long long vec_cmpge(vector signed long long, vector signed long long)
vector bool long long vec_cmpge(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmpgt(vector double, vector double)
vector bool long long vec_cmple(vector double, vector double)
vector bool long long vec_cmple(vector signed long long, vector signed long long)
vector bool long long vec_cmple(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmplt(vector double, vector double)
vector bool long long vec_cmplt(vector signed long long, vector signed long long)
vector bool long long vec_cmplt(vector unsigned long long, vector unsigned long long)
llvm-svn: 240821
Skip calls to HasTrivialDestructorBody() in the case where the
destructor is never invoked. Alternatively, Richard proposed changing
Sema to declare a trivial destructor for anonymous union members, which
seems too wasteful.
Differential Revision: http://reviews.llvm.org/D10508
llvm-svn: 240742
When a profile file cannot be opened, we used to display just the error
message but not the name of the profile the compiler was trying to open.
This will become useful in the next set of patches that introduce
GCC-compatible flags to specify profiles.
llvm-svn: 240715
Integer variants are implemented as atomicrmw or cmpxchg instructions.
Atomic add for floating point (__nvvm_atom_add_gen_f()) is implemented
as a call to an overloaded @llvm.nvvm.atomic.load.add.f32.* LVVM
intrinsic.
Differential Revision: http://reviews.llvm.org/D10666
llvm-svn: 240669
Summary:
Byval argument pair formation assumes that if a type is less than 8 bytes
it must be an integer and not a pointer, which is not true for x32 and NaCl.
Relax the assertion and add a test for a codegen case that triggered it.
Reviewers: jvoung
Subscribers: jfb, cfe-commits
Differential Revision: http://reviews.llvm.org/D10701
llvm-svn: 240600
If a task directive has an associated 'depend' clause, then the function
kmp_int32 __kmpc_omp_task_with_deps(ident_t *loc_ref, kmp_int32 gtid, kmp_task_t *new_task, kmp_int32 ndeps, kmp_depend_info_t *dep_list, kmp_int32 ndeps_noalias, kmp_depend_info_t *noalias_dep_list)
must be called instead of __kmpc_omp_task().
If the directive also has an associated 'if' clause, then before the call to __kmpc_omp_task_begin_if0() the function
void __kmpc_omp_wait_deps(ident_t *loc_ref, kmp_int32 gtid, kmp_int32 ndeps, kmp_depend_info_t *dep_list, kmp_int32 ndeps_noalias, kmp_depend_info_t *noalias_dep_list)
must also be called.
Array sections are not supported yet.
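At the source level, the construct being lowered looks roughly like this (produce/consume are placeholder functions):
extern int produce(void);
extern int consume(int);
int a, b;                           // shared between the tasks
void work(void) {
  #pragma omp task depend(out: a)
  a = produce();
  #pragma omp task depend(in: a) depend(out: b)
  b = consume(a);                   // lowered via __kmpc_omp_task_with_deps
}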
llvm-svn: 240532
This fixes a serious bug in r240462: checking the BuiltinID for
ARM::BI_MoveToCoprocessor* in EmitBuiltinExpr() ignores the fact that
each target has an overlapping range of the BuiltinID values. That check
can trigger for builtins from other targets, leading to very bad
behavior.
Part of the reason I did not implement r240462 this way to begin with is
the special handling of the last argument for Neon builtins. In this
change, I have factored out the check to see which builtins have that
extra argument into a new HasExtraNeonArgument() function. There is still
some awkwardness in having to check for those builtins in two separate
places, i.e., once to see if the extra argument is present and once to
generate the appropriate IR, but this seems much cleaner than my previous
patch.
llvm-svn: 240522
The Microsoft-extension _MoveToCoprocessor and _MoveToCoprocessor2
builtins take the register value to be moved as the first argument,
but the corresponding mcr and mcr2 LLVM intrinsics expect that value
to be the third argument. Handle this as a special case, while still
leaving those intrinsics as generic MSBuiltins. I considered the
alternative of handling these in EmitARMBuiltinExpr, but that does
not work well for the follow-up change that I'm going to make to improve
the error handling for PR22560 -- we need the GetBuiltinType() checks
for ICEArguments, and the ARM version of that code is only used for
Neon intrinsics where the last argument is special and not
checked in the normal way.
llvm-svn: 240462
Virtual inheritance member pointers are always relative to the vbindex,
even when the member pointer doesn't point into a virtual base. This is
corrected by adjusting the non-virtual offset backwards from the vbptr
back to the top of the most derived class. While we performed this
adjustment when manifesting member pointers as constants or when
performing conversions, we didn't perform the adjustment when mangling
them.
llvm-svn: 240453
Parsing and sema analysis (without support for array sections in arguments) for 'depend' clause (used in 'task' directive, OpenMP 4.0).
llvm-svn: 240409
Member pointers in the MS ABI are made complicated due to the following:
- Virtual methods in the most derived class (MDC) might live in a
vftable in a virtual base.
- There are four different representations of member pointer: single
inheritance, multiple inheritance, virtual inheritance and the "most
general" representation.
- Bases might have a *more* general representation than classes which
derived from them, a most surprising result.
We believed that we could treat all member pointers as-if they were a
degenerate case of the multiple inheritance model. This fell apart once
we realized that implementing standard member pointers using this ABI
requires referencing members with a non-zero vbindex.
On a bright note, all but the virtual inheritance model operate rather
similarly. The virtual inheritance member pointer representation
awkwardly requires a virtual base adjustment in order to refer to
entities in the MDC.
However, the first virtual base might be quite far from the start of the
virtual base. This means that we must add a negative non-virtual
displacement.
However, things get even more complicated. The most general
representation interprets vbindex zero differently from the virtual
inheritance model: it doesn't reference the vbtable at all.
It turns out that this complexity can increase for quite some time:
consider a derived to base conversion from the most general model to the
multiple inheritance model...
To manage this complexity we introduce a concept of "normalized" member
pointer which allows us to treat all three models as the most general
model. Then we try to figure out how to map this generalized member
pointer onto the destination member pointer model. I've done my best to
furnish the code with comments explaining why each adjustment is
performed.
This fixes PR23878.
llvm-svn: 240384
The MS ABI has very complicated member pointers. Don't attempt to
synthesize the final member pointer ab ovo usque ad mala in one go.
Instead, start with a member pointer which points to the declaration in
question as-if it's decl context was the target class. Then, utilize
our conversion logical to convert it to the target type.
This allows us to simplify how we think about member pointers because we
don't need to consider non-zero nv adjustments before we even generate
the member pointer. Furthermore, it gives our adjustment logic more
exposure by utilizing it in a common path.
llvm-svn: 240383
As specified in the SysV AVX512 ABI drafts. It follows the same scheme
as AVX2:
Arguments of type __m512 are split into eight eightbyte chunks.
The least significant one belongs to class SSE and all the others
to class SSEUP.
This also means we change the OpenMP SIMD default alignment on AVX512.
Based on r240337.
Differential Revision: http://reviews.llvm.org/D9894
llvm-svn: 240338
The patch is generated using this command:
$ tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
work/llvm/tools/clang
To reduce churn, not touching namespaces spanning less than 10 lines.
llvm-svn: 240270
Testcase provided, in the PR, by Christian Shelton and
reduced by David Majnemer.
PR: 23584
Differential Revision: http://reviews.llvm.org/D10508
Reviewed by: rnk
llvm-svn: 240242
This patch adds initial support for the -fsanitize=kernel-address flag to Clang.
Right now it's quite restricted: only out-of-line instrumentation is supported, globals are not instrumented, some GCC kasan flags are not supported.
Using this patch I am able to build and boot the KASan tree with LLVMLinux patches from github.com/ramosian-glider/kasan/tree/kasan_llvmlinux.
To disable KASan instrumentation for a certain function, __attribute__((no_sanitize("kernel-address"))) can be used.
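A sketch of how the attribute is applied (the function and its arguments are made up):
__attribute__((no_sanitize("kernel-address")))
void early_mmio_write(volatile unsigned long *reg, unsigned long val) {
  *reg = val;  // left uninstrumented even when -fsanitize=kernel-address is enabled
}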
llvm-svn: 240131
Clang's control flow integrity implementation works by conceptually attaching
"tags" (in the form of bitset entries) to each virtual table, identifying
the names of the classes that the virtual table is compatible with. Under
the Itanium ABI, it is simple to assign tags to virtual tables; they are
simply the address points, which are available via VTableLayout. Because any
overridden methods receive an entry in the derived class's virtual table,
a check for an overridden method call can always be done by checking the
tag of whichever derived class overrode the method call.
The Microsoft ABI is a little different, as it does not directly use address
points, and overrides in a derived class do not cause new virtual table entries
to be added to the derived class; instead, the slot in the base class is
reused, and the compiler needs to adjust the this pointer at the call site
to (generally) the base class that initially defined the method. After the
this pointer has been adjusted, we cannot check for the derived class's tag,
as the virtual table may not be compatible with the derived class. So we
need to determine which base class we have been adjusted to.
Specifically, at each call site, we use ASTRecordLayout to identify the most
derived class whose virtual table is laid out at the "this" pointer offset
we are using to make the call, and check the virtual table for that tag.
Because address point information is unavailable, we "reconstruct" it as
follows: any virtual tables we create for a non-derived class receive a tag
for that class, and virtual tables for a base class inside a derived class
receive a tag for the base class, together with tags for any derived classes
which are laid out at the same position as the derived class (and therefore
have compatible virtual tables).
Differential Revision: http://reviews.llvm.org/D10520
llvm-svn: 240117
This causes programs compiled with this flag to print a diagnostic when
a control flow integrity check fails instead of aborting. Diagnostics are
printed using UBSan's runtime library.
The main motivation of this feature over -fsanitize=vptr is fidelity with
the -fsanitize=cfi implementation: the diagnostics are printed under exactly
the same conditions as those which would cause -fsanitize=cfi to abort the
program. This means that the same restrictions apply regarding compiling
all translation units with -fsanitize=cfi, cross-DSO virtual calls are
forbidden, etc.
Differential Revision: http://reviews.llvm.org/D10268
llvm-svn: 240109
This flag controls whether a given sanitizer traps upon detecting
an error. It currently only supports UBSan. The existing flag
-fsanitize-undefined-trap-on-error has been made an alias of
-fsanitize-trap=undefined.
This change also cleans up some awkward behavior around the combination
of -fsanitize-trap=undefined and -fsanitize=undefined. Previously we
would reject command lines containing the combination of these two flags,
as -fsanitize=vptr is not compatible with trapping. This required the
creation of -fsanitize=undefined-trap, which excluded -fsanitize=vptr
(and -fsanitize=function, but this seems like an oversight).
Now, -fsanitize=undefined is an alias for -fsanitize=undefined-trap,
and if -fsanitize-trap=undefined is specified, we treat -fsanitize=vptr
as an "unsupported" flag, which means that we error out if the flag is
specified explicitly, but implicitly disable it if the flag was implied
by -fsanitize=undefined.
Differential Revision: http://reviews.llvm.org/D10464
llvm-svn: 240105
The most general model has fields for the vbptr offset and the vbindex.
Don't initialize the vbptr offset if the vbindex is 0: we aren't
referencing an entity from a vbase.
Getting this wrong can make member pointer equality fail.
llvm-svn: 240043
Added parsing, sema analysis and codegen for '#pragma omp taskgroup' directive (OpenMP 4.0).
The code for the directive is generated the following way:
#pragma omp taskgroup
<body>
void __kmpc_taskgroup(<loc>, thread_id);
<body>
void __kmpc_end_taskgroup(<loc>, thread_id);
llvm-svn: 240011
Added codegen for combined 'omp for simd' directives, that is a combination of 'omp for' directive followed by 'omp simd' directive. Includes support for all clauses.
llvm-svn: 239990
The following code is generated for the reduction clause within an 'omp simd' loop construct:
#pragma omp simd reduction(op:var)
for (...)
<body>
alloca priv_var
priv_var = <initial reduction value>;
<loop_start>:
<body> // references to original 'var' are replaced by 'priv_var'
<loop_end>:
var op= priv_var;
llvm-svn: 239881
Previously the last iteration of simd loop-based OpenMP constructs was generated as separate code. This feature is not required, and codegen is simplified.
llvm-svn: 239810
We were propagating the coverage map into the body of an if statement,
but not into the condition thereafter. This is fine as long as the two
locations are in the same virtual file, but they won't be when the
"if" part of the statement is from a macro and the condition is not.
llvm-svn: 239803
This patch adds the -fsanitize=safe-stack command line argument for clang,
which enables the Safe Stack protection (see http://reviews.llvm.org/D6094
for the detailed description of the Safe Stack).
This patch is our implementation of the safe stack on top of Clang. The
patches make the following changes:
- Add -fsanitize=safe-stack and -fno-sanitize=safe-stack options to clang
to control safe stack usage (the safe stack is disabled by default).
- Add __attribute__((no_sanitize("safe-stack"))) attribute to clang that can be
used to disable the safe stack for individual functions even when enabled
globally.
Original patch by Volodymyr Kuznetsov and others at the Dependable Systems
Lab at EPFL; updates and upstreaming by myself.
Differential Revision: http://reviews.llvm.org/D6095
llvm-svn: 239762
in section 10.1, __arm_{w,r}sr{,p,64}.
This includes arm_acle.h definitions with builtins and codegen to support
these, the intrinsics are implemented by generating read/write_register calls
which get appropriately lowered in the backend based on the register string
provided. SemaChecking is also implemented to diagnose invalid parameters.
Differential Revision: http://reviews.llvm.org/D9697
llvm-svn: 239737
Instead, just EvaluateAsInt().
Follow-up to r239549: rsmith points out that isICE() is expensive;
seems like it's not the right concept anyway, as it fails on
`static const' in C, and will actually trigger the assert below on:
test/Sema/inline-asm-validate-x86.c
llvm-svn: 239651
Summary:
In addition to easier syntax, IRBuilder makes sure to set correct
debug locations for newly added instructions (bitcast and
llvm.lifetime itself). This restores the original behavior, which
was modified by r234581 (reapplied as r235553).
Extend one of the tests to check for debug locations.
Test Plan: regression test suite
Reviewers: aadg, dblaikie
Subscribers: cfe-commits, majnemer
Differential Revision: http://reviews.llvm.org/D10418
llvm-svn: 239643
If llvm.lifetime.end turns out to be the first instruction in the last
basic block, we can decrement the iterator twice, going past rend.
At the moment, this can never happen because llvm.lifetime.end always
goes immediately after bitcast, but relying on this is very brittle.
llvm-svn: 239638
Right now we're ignoring the fpmath attribute, since there's no
backend support for a feature like this; supporting it would require
checking the validity of the strings and doing general subtarget
feature parsing of valid and invalid features, as with the target
attribute feature.
llvm-svn: 239582
Modeled after the gcc attribute of the same name, this feature
allows source level annotations to correspond to backend code
generation. In llvm particular parlance, this allows the adding
of subtarget features and changing the cpu for a particular function
based on source level hints.
This has been added into the existing support for function level
attributes without particular verification for any target outside
of whether or not the backend will support the features/cpu given
(similar to section, etc).
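A sketch of the source-level annotation (the exact set of accepted strings depends on the target and is not verified here, per the above):
__attribute__((target("avx2,fma")))
double dot_avx2(const double *a, const double *b, int n);

__attribute__((target("arch=core2")))
double dot_core2(const double *a, const double *b, int n);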
llvm-svn: 239579
Specifying #pragma clang loop vectorize(assume_safety) on a loop adds the
mem.parallel_loop_access metadata to each load/store operation in the loop. This
metadata tells loop access analysis (LAA) to skip memory dependency checking.
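For example (a sketch), an indexed update that loop access analysis could not otherwise prove safe:
void scatter_add(float *out, const float *in, const int *idx, int n) {
  #pragma clang loop vectorize(assume_safety)
  for (int i = 0; i < n; ++i)
    out[idx[i]] += in[i];   // each load/store is tagged with the metadata above
}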
llvm-svn: 239572
For inline assembly immediate constraints, we currently always use
EmitScalarExpr, instead of directly emitting the constant. When the
overflow sanitizer is enabled, this generates overflow intrinsics
instead of constants.
Instead, emit a constant for constraints that either require an
immediate (e.g. 'I' on X86), or only accepts constants (immediate
or symbolic; i.e., don't accept registers or memory).
Fixes PR19763.
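A sketch of the kind of constraint affected (x86 AT&T syntax; 'I' accepts only a small immediate):
int shl3(int x) {
  int r = x;
  // the 3 is now emitted directly as a constant rather than through
  // EmitScalarExpr, so no overflow intrinsic is generated for it
  __asm__("shll %1, %0" : "+r"(r) : "I"(3));
  return r;
}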
Differential Revision: http://reviews.llvm.org/D10255
llvm-svn: 239549
Remove the restriction which forbade forming pointers to member
functions which had parameter types or return types which were not
convertible.
llvm-svn: 239499
CodeGenOptions and onto the PassManagerBuilder. This enables gating
the new EliminateAvailableExternally module pass on whether we are
preparing for LTO.
If we are preparing for LTO (e.g. a -flto -c compile), the new pass is not
included as we want to preserve available externally functions for possible
link time inlining.
llvm-svn: 239481
Based on previous discussion on the mailing list, clang currently lacks support
for C99 partial re-initialization behavior:
Reference: http://lists.cs.uiuc.edu/pipermail/cfe-dev/2013-April/029188.html
Reference: http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_253.htm
This patch attempts to fix this problem.
Given the following code snippet,
struct P1 { char x[6]; };
struct LP1 { struct P1 p1; };
struct LP1 l = { .p1 = { "foo" }, .p1.x[2] = 'x' };
// this example is adapted from the example for "struct fred x[]" in DR-253;
// currently clang produces in l: { "\0\0x" },
// whereas gcc 4.8 produces { "fox" };
// with this fix, clang will also produce: { "fox" };
Differential Review: http://reviews.llvm.org/D5789
llvm-svn: 239446
This commit adds back the code that seems to have been dropped unintentionally
in r176985.
rdar://problem/13752163
Differential Revision: http://reviews.llvm.org/D10100
llvm-svn: 239426