If this assert does fire, the no-asserts behaviour is an infinite
loop. It's better to crash in this case so we get a crash report and
stop wasting the user's CPU cycles.
llvm-svn: 242591
That platform has alignof(uint64_t) == 4, but, since LLVM_ALIGNAS(...)
cannot take anything but literal integers due to MSVC limitations, the
literal '8' used there didn't match. Switch ScopeStackAlignment to
just use 8, as well.
llvm-svn: 242578
Currently, -save-temps causes the ObjCARC optimization to be dropped, the
sanitizer pass to run early in the pipeline, and the profiling
instrumentation to run twice.
Fix the issue by properly disabling all passes in the optimization
pipeline when generating bitcode output, and by parsing some of the
Language Options even when the input is bitcode so the passes can be set
up correctly.
llvm-svn: 242565
Some const-correctness changes snuck in here too, since they were in the
area of code I was modifying.
This seems to make Clang actually work without a bus error on
32-bit SPARC.
Follow-up patches will factor out a trailing-object helper class, to
make classes using the idiom of appending objects to other objects
easier to understand, and to ensure (with static_assert) that required
alignment guarantees continue to hold.
Differential Revision: http://reviews.llvm.org/D10272
llvm-svn: 242554
We shouldn't crash despite the AMD64 ABI not giving clear guidance as to
how to pass around vector types <= 32 bits. Instead, classify such
vectors as INTEGER to be compatible with GCC.
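For illustration (this sketch is not from the commit's tests), a 32-bit
vector of the affected kind:

typedef short v2i16 __attribute__((vector_size(4))); // two 16-bit lanes, 32 bits

// Now classified INTEGER: passed and returned in a general-purpose
// register, matching GCC.
v2i16 pass_through(v2i16 v) { return v; }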
This fixes PR24162.
llvm-svn: 242508
- introduces a new cc1 option -fmodule-format=[raw,obj]
with 'raw' being the default
- supports arbitrary module container formats that libclang is agnostic to
- adds the format to the module hash to avoid collisions
- splits the old PCHContainerOperations into PCHContainerWriter and
a PCHContainerReader.
Thanks to Richard Smith for reviewing this patch!
llvm-svn: 242499
We now use the sanitizer special case list to decide which types to blacklist.
We also support a special blacklist entry for types with a uuid attribute,
which are generally COM types whose virtual tables are defined externally.
Differential Revision: http://reviews.llvm.org/D11096
llvm-svn: 242286
This patch corresponds to review:
http://reviews.llvm.org/D11184
A number of new interfaces for altivec.h (as mandated by the ABI); a short usage sketch follows the list:
vector float vec_cpsgn(vector float, vector float)
vector double vec_cpsgn(vector double, vector double)
vector double vec_or(vector bool long long, vector double)
vector double vec_or(vector double, vector bool long long)
vector double vec_re(vector double)
vector signed char vec_cntlz(vector signed char)
vector unsigned char vec_cntlz(vector unsigned char)
vector short vec_cntlz(vector short)
vector unsigned short vec_cntlz(vector unsigned short)
vector int vec_cntlz(vector int)
vector unsigned int vec_cntlz(vector unsigned int)
vector signed long long vec_cntlz(vector signed long long)
vector unsigned long long vec_cntlz(vector unsigned long long)
vector signed char vec_nand(vector bool signed char, vector signed char)
vector signed char vec_nand(vector signed char, vector bool signed char)
vector signed char vec_nand(vector signed char, vector signed char)
vector unsigned char vec_nand(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector unsigned char)
vector short vec_nand(vector bool short, vector short)
vector short vec_nand(vector short, vector bool short)
vector short vec_nand(vector short, vector short)
vector unsigned short vec_nand(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector unsigned short)
vector int vec_nand(vector bool int, vector int)
vector int vec_nand(vector int, vector bool int)
vector int vec_nand(vector int, vector int)
vector unsigned int vec_nand(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector unsigned int)
vector signed long long vec_nand(vector bool long long, vector signed long long)
vector signed long long vec_nand(vector signed long long, vector bool long long)
vector signed long long vec_nand(vector signed long long, vector signed long long)
vector unsigned long long vec_nand(vector bool long long, vector unsigned long long)
vector unsigned long long vec_nand(vector unsigned long long, vector bool long long)
vector unsigned long long vec_nand(vector unsigned long long, vector unsigned long long)
vector signed char vec_orc(vector bool signed char, vector signed char)
vector signed char vec_orc(vector signed char, vector bool signed char)
vector signed char vec_orc(vector signed char, vector signed char)
vector unsigned char vec_orc(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector unsigned char)
vector short vec_orc(vector bool short, vector short)
vector short vec_orc(vector short, vector bool short)
vector short vec_orc(vector short, vector short)
vector unsigned short vec_orc(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector unsigned short)
vector int vec_orc(vector bool int, vector int)
vector int vec_orc(vector int, vector bool int)
vector int vec_orc(vector int, vector int)
vector unsigned int vec_orc(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector unsigned int)
vector signed long long vec_orc(vector bool long long, vector signed long long)
vector signed long long vec_orc(vector signed long long, vector bool long long)
vector signed long long vec_orc(vector signed long long, vector signed long long)
vector unsigned long long vec_orc(vector bool long long, vector unsigned long long)
vector unsigned long long vec_orc(vector unsigned long long, vector bool long long)
vector unsigned long long vec_orc(vector unsigned long long, vector unsigned long long)
vector signed char vec_div(vector signed char, vector signed char)
vector unsigned char vec_div(vector unsigned char, vector unsigned char)
vector signed short vec_div(vector signed short, vector signed short)
vector unsigned short vec_div(vector unsigned short, vector unsigned short)
vector signed int vec_div(vector signed int, vector signed int)
vector unsigned int vec_div(vector unsigned int, vector unsigned int)
vector signed long long vec_div(vector signed long long, vector signed long long)
vector unsigned long long vec_div(vector unsigned long long, vector unsigned long long)
vector unsigned char vec_mul(vector unsigned char, vector unsigned char)
vector unsigned int vec_mul(vector unsigned int, vector unsigned int)
vector unsigned long long vec_mul(vector unsigned long long, vector unsigned long long)
vector unsigned short vec_mul(vector unsigned short, vector unsigned short)
vector signed char vec_mul(vector signed char, vector signed char)
vector signed int vec_mul(vector signed int, vector signed int)
vector signed long long vec_mul(vector signed long long, vector signed long long)
vector signed short vec_mul(vector signed short, vector signed short)
vector signed long long vec_mergeh(vector signed long long, vector signed long long)
vector signed long long vec_mergeh(vector signed long long, vector bool long long)
vector signed long long vec_mergeh(vector bool long long, vector signed long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergeh(vector bool long long, vector unsigned long long)
vector double vec_mergeh(vector double, vector double)
vector double vec_mergeh(vector double, vector bool long long)
vector double vec_mergeh(vector bool long long, vector double)
vector signed long long vec_mergel(vector signed long long, vector signed long long)
vector signed long long vec_mergel(vector signed long long, vector bool long long)
vector signed long long vec_mergel(vector bool long long, vector signed long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergel(vector bool long long, vector unsigned long long)
vector double vec_mergel(vector double, vector double)
vector double vec_mergel(vector double, vector bool long long)
vector double vec_mergel(vector bool long long, vector double)
vector signed int vec_pack(vector signed long long, vector signed long long)
vector unsigned int vec_pack(vector unsigned long long, vector unsigned long long)
vector bool int vec_pack(vector bool long long, vector bool long long)
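A minimal usage sketch for one of the interfaces above (illustrative;
vec_cntlz needs a POWER8 target, e.g. -mcpu=pwr8):

#include <altivec.h>

vector int leading_zeros(vector int v) {
  return vec_cntlz(v); // counts leading zeros in each element
}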
llvm-svn: 242171
The fix is twofold: remove the duplicate copy-initialization of the only memcpy-able struct member, and correct the addresses of aggregately initialized members in destructor calls during stack unwinding (obtaining the address of a struct member with a GEP instead of a 'bitcast').
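A hedged sketch of the kind of struct this affects (illustrative, not from the commit):

struct NonTrivial {
  NonTrivial(const NonTrivial &); // may throw
  ~NonTrivial();
};

struct S {
  int data[4];     // the only memcpy-able member: must be copied exactly once
  NonTrivial a, b; // if copying b throws while copying S, a must be destroyed
                   // at an address computed with GEP, not via 'bitcast'
};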
Differential Revision: http://reviews.llvm.org/D10990
llvm-svn: 242127
which is actually the variable backing up the llvm -time-passes command
line argument.
llvm::TimePassesIsEnabled is actually being initialized in CodeGenAction.
llvm-svn: 242099
Under the -fsanitize-memory-use-after-dtor (disabled by default) insert
an MSan runtime library call at the end of every destructor.
Patch by Naomi Musgrave.
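A hedged sketch of what the instrumentation catches (illustrative; compile
with -fsanitize=memory -fsanitize-memory-use-after-dtor):

#include <new>

struct Trivial {
  int member;
  ~Trivial() {} // with the flag, an MSan runtime call emitted here poisons the storage
};

int use_after_dtor() {
  alignas(Trivial) char buf[sizeof(Trivial)];
  Trivial *t = new (buf) Trivial;
  t->member = 42;
  t->~Trivial();
  return t->member; // use after destruction: now reported by MSan
}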
llvm-svn: 242097
Previously, clang/llvm treated inline-asm instructions conservatively,
choosing not to eliminate the instructions or hoisting them out of a loop
even when it was safe to do so. This commit makes changes to attach a
readonly or readnone attribute to an inline-asm instruction, which enables
passes such as LICM and EarlyCSE to move or optimize away the instruction.
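A hedged illustration (x86; not from the commit's tests): an asm with no
memory operands, clobbers, or side effects can now be treated as readnone
and, given loop-invariant inputs, hoisted by LICM:

unsigned byte_swap(unsigned v) {
  unsigned r;
  asm("bswap %0" : "=r"(r) : "0"(v)); // no "memory" clobber, not volatile
  return r;
}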
rdar://problem/11358192
Differential Revision: http://reviews.llvm.org/D10546
llvm-svn: 241930
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:

struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;

unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}
The "align 4" is actually wrong. Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.
This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp. Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.
Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).
In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage. This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.
Differential Revision: http://reviews.llvm.org/D11034
llvm-svn: 241916
Code in CGCall.cpp that loads up function arguments that need to be
coerced to a different type may in some cases ignore the fact that
the source of the argument is not naturally aligned. This may cause
incorrect code to be generated. In some places in CreateCoercedLoad,
we already have setAlignment calls to address this, but I ran into one
where it was missing, causing wrong code generation on SystemZ.
However, in that location, we do not actually know what alignment of
the source location we can rely on; the callers do not pass anything
to this routine. This is already an issue in other places in
CreateCoercedLoad; and the same problem exists for CreateCoercedStore.
To avoid pessimising code, and to fix the FIXMEs already in place,
this patch also adds an alignment argument to the CreateCoerced*
routines and uses it instead of forcing an alignment of 1. The
callers are changed to pass in the best information they have.
This actually requires changes in a number of existing test cases
since we now get better alignment in many places.
Differential Revision: http://reviews.llvm.org/D11033
llvm-svn: 241898
We were previously creating bit set entries at virtual table offset
sizeof(void*) unconditionally under the Microsoft C++ ABI. This is incorrect
if RTTI data is disabled; in that case the "address point" is at offset
0. This change modifies bit set emission to take into account whether RTTI
data is being emitted.
Also make a start on a blacklisting scheme for records.
Differential Revision: http://reviews.llvm.org/D11048
llvm-svn: 241845
Move the diagnostic back to codegen so that we can compile ATL on the
self-host bot. We don't actually end up emitting code for the __try, so
the diagnostic won't be hit.
llvm-svn: 241761
For Mips direct-to-nacl, the goal is to be close to le32 front-end and
use Mips32EL backend. This patch defines new NaClMips32ELTargetInfo and
modifies it slightly to be close to le32. It also adds the necessary
parts, in line with ARM and X86.
Differential Revision: http://reviews.llvm.org/D10739
llvm-svn: 241678
The fix is to emit cleanup for arrays of memcpy-able objects in struct if an exception is thrown later during copy-construction.
Differential Revision: http://reviews.llvm.org/D10989
llvm-svn: 241670
We didn't correctly process the case where a base class is classified as
MEMORY. This would cause us to trip over an assertion.
This fixes PR24020.
Differential Revision: http://reviews.llvm.org/D10907
llvm-svn: 241667
We forgot to run postMerge after deciding that the union had to be
classified as MEMORY. This left us with Lo == MEMORY and Hi == SSEUp
which is an invalid combination.
This fixes PR24021.
Differential Revision: http://reviews.llvm.org/D10908
llvm-svn: 241666
This patch adds ObjectFilePCHContainerOperations, which uses the LLVM
backend to put the contents of a PCH into a __clangast section inside a
COFF, ELF, or Mach-O object file container.
This is done to facilitate module debugging by making it possible to
store the debug info for the types defined by a module alongside the AST.
rdar://problem/20091852
llvm-svn: 241620
When messaging a method that was defined in an Objective-C class (or
category or extension thereof) that has type parameters, substitute
the type arguments for those type parameters. Similarly, substitute
into property accesses, instance variables, and other references.
This includes general infrastructure for substituting the type
arguments associated with an ObjCObject(Pointer)Type into a type
referenced within a particular context, handling all of the
substitutions required to deal with (e.g.) inheritance involving
parameterized classes. In cases where no type arguments are available
(e.g., because we're messaging via some unspecialized type, id, etc.),
we substitute in the type bounds for the type parameters instead.
Example:
@interface NSSet<T : id<NSCopying>> : NSObject <NSCopying>
- (T)firstObject;
@end

void f(NSSet<NSString *> *stringSet, NSSet *anySet) {
  [stringSet firstObject]; // produces NSString*
  [anySet firstObject];    // produces id<NSCopying> (the bound)
}
When substituting for the type parameters given an unspecialized
context (i.e., no specific type arguments were given), substituting
the type bounds unconditionally produces type signatures that are too
strong compared to the pre-generics signatures. Instead, use the
following rule:
- In covariant positions, such as method return types, replace type
parameters with “id” or “Class” (the latter only when the type
parameter bound is “Class” or a qualified Class, e.g.,
“Class<NSCopying>”)
- In other positions (e.g., parameter types), replace type
parameters with their type bounds.
- When a specialized Objective-C object or object pointer type
contains a type parameter in its type arguments (e.g.,
NSArray<T>*, but not NSArray<NSString *> *), replace the entire
object/object pointer type with its unspecialized version (e.g.,
NSArray *).
llvm-svn: 241543
Produce type parameter declarations for Objective-C type parameters,
and attach lists of type parameters to Objective-C classes,
categories, forward declarations, and extensions as
appropriate. Perform semantic analysis of type bounds for type
parameters, both in isolation and across classes/categories/extensions
to ensure consistency.
Also handle (de-)serialization of Objective-C type parameter lists,
along with sundry other things one must do to add a new declaration to
Clang.
Note that Objective-C type parameters are typedef name declarations,
like typedefs and C++11 type aliases, in support of type erasure.
Part of rdar://problem/6294649.
llvm-svn: 241541
different function signatures. (Previously clang would emit all block
pointer types with the type of the first block pointer in the compile
unit.)
rdar://problem/21602473
llvm-svn: 241534
This reverts commit r241244, but restricts SEH support to Win64.
This way, Chromium builds will still fall back on TUs with SEH, and
Clang developers can work on this incrementally upstream while patching
this small predicate locally. It'll also make it easier to review small
fixes.
llvm-svn: 241533
The patch is the same except for the addition of a new test for the
issue that required reverting the dependent llvm commit.
--Original Commit Message--
Pass down the -flto option to the -cc1 job, and from there into the
CodeGenOptions and onto the PassManagerBuilder. This enables gating
the new EliminateAvailableExternally module pass on whether we are
preparing for LTO.
If we are preparing for LTO (e.g. a -flto -c compile), the new pass is not
included as we want to preserve available externally functions for possible
link time inlining.
llvm-svn: 241467
This is needed to use clang's command line option "-ftrap-function" for LTO and
enable changing the trap function name on a per-call-site basis.
rdar://problem/21225723
Differential Revision: http://reviews.llvm.org/D10831
llvm-svn: 241306
While there, replace stable_sort of std::string with plain sort; stability
is not necessary for "simple" value types. No functional change intended.
llvm-svn: 241299
The following code is generated for this construct:
```
if (__kmpc_cancellationpoint(ident_t *loc, kmp_int32 global_tid, kmp_int32 cncl_kind) != 0)
  <exit from outer innermost construct>;
```
llvm-svn: 241239
32-bit finally funclets are intended to be called both directly from the
parent function and indirectly from the EH runtime. Because we aren't
contorting LLVM's X86 prologue to match MSVC's, calling the finally
block directly passes in a different value of EBP than the one that the
runtime provides. We need an adapter thunk to adjust EBP to the expected
value. However, WinEHPrepare already has to solve this problem when
cleanups are not pre-outlined, so we can go ahead and rely on it rather
than duplicating work.
Now we only do the llvm.x86.seh.recoverfp dance for 32-bit SEH filter
functions.
llvm-svn: 241187
This re-lands r236052 and adds support for __exception_code().
In 32-bit SEH, the exception code is not available in eax. It is only
available in the filter function, and now we arrange to load it and
store it into an escaped variable in the parent frame.
As a consequence, we have to disable the "catch i8* null" optimization
on 32-bit and always generate a filter function. We can re-enable the
optimization if we detect an __except block that doesn't use the
exception code, but this probably isn't worth optimizing.
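A hedged sketch of the construct involved (MS extensions; illustrative):

unsigned handle_av() {
  __try {
    *(volatile int *)0 = 0;    // raises an access violation
  } __except (1) {             // on 32-bit, a filter function is now always generated
    return __exception_code(); // loaded in the filter, stored to the parent frame
  }
  return 0;
}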
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D10852
llvm-svn: 241171
Function static variables, typedefs and records (class, struct or union) declared inside
a lexical scope were associated with the function as their parent scope, rather than the
lexical scope they are defined or declared in.
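A hedged illustration of the scoping involved:

void f() {
  { // a lexical scope inside f
    static int counter;   // debug info parent should be this lexical block,
    struct Local {};      // likewise for records...
    typedef int LocalInt; // ...and typedefs, not the enclosing function
  }
}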
This fixes PR19238.
Patch by: amjad.aboud@intel.com
Differential Revision: http://reviews.llvm.org/D9760
llvm-svn: 241154
When an internal-linkage thunk is code gen'd, CodeGenVTables::emitThunk
will first be called with ForVTable=true (which incorrectly set the
thunk's linkage to available_externally under the Itanium ABI) and later
with ForVTable=false (which reset it to internal). Because we will always
see a call with ForVTable=false, this incorrect linkage never ended up in
the final IR. However, the temporary presence of this linkage caused us
to give such functions a comdat as a result of code introduced in r241102.
To avoid this, check that the thunk is externally visible before giving it
available_externally linkage.
llvm-svn: 241136
The LifetimeExtendedCleanupHeader is carefully fit into 32 bits,
meaning that cleanups on the LifetimeExtendedCleanupStack are *always*
allocated at a misaligned address and cause undefined behaviour.
There are two ways to solve this - add padding after the header when
we allocated our cleanups, or just simplify the header and let it use
64 bits in the first place. I've opted for the latter, and added a
static assert to avoid the issue in the future.
llvm-svn: 241133
using a string map to canonicalize. Fix up a couple of testcases
that needed changing since we are no longer simply appending features
to the list, but all of their mask dependencies as well.
llvm-svn: 241129
Previously we were not assigning a comdat to thunks in the Microsoft ABI,
which would have required us to emit these functions outside of a comdat.
(Due to an inconsistency in how we were emitting objects, we were getting this
right most of the time, but only when compiling with function sections.) This
code generator change causes us to create a comdat for each thunk.
Differential Revision: http://reviews.llvm.org/D10829
llvm-svn: 241102
This allows a module-aware debugger such as LLDB to import the currently
visible modules before dropping into the expression evaluator.
rdar://problem/20965932
llvm-svn: 241084
isTriviallyRecursive is a hack used to bridge a gap between the
expectations that source code assumes and the semantics that LLVM IR can
provide. Specifically, asm labels on functions are treated as an
explicit name for a GlobalObject in Clang but treated like an
output-processing step in GCC. Tweak this hack a little further to emit
calls to library functions instead of emitting an incorrect definition.
The definition in question would have available_externally linkage (this
is OK) but result in a call to itself which will either result in an
infinite loop or stack overflow.
This fixes PR23964.
llvm-svn: 241043
MSVC only generates array cookies if the class has a destructor. This
is problematic when having to call T::operator delete[](void *, size_t)
because the second argument is impossible to synthesize correctly if the
class has no destructor (because there will be no array cookie).
Instead, MSVC passes the size of the class. Do the same, for
compatibility, instead of crashing.
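A hedged sketch of the situation (illustrative):

#include <cstddef>

struct S { // no destructor, so MSVC emits no array cookie
  void operator delete[](void *p, std::size_t n);
};

void f(S *p) {
  delete[] p; // 'n' cannot be recovered from a cookie; pass sizeof(S) instead
}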
This fixes PR23990.
llvm-svn: 241038
This matches the implementation of the gcc support for the same
feature, including checking the values set up by libgcc at runtime.
The structure looks like this:

  unsigned int __cpu_vendor;
  unsigned int __cpu_type;
  unsigned int __cpu_subtype;
  unsigned int __cpu_features[1];

with a set of enums to match the various fields that are filled out after
parsing the output of the cpuid instruction.
This also adds a set of errors checking for valid input (and cpu).
compiler-rt support for this and the other builtins in this family
(__builtin_cpu_init and __builtin_cpu_is) is forthcoming.
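A minimal usage sketch (assuming the builtin this commit adds is
__builtin_cpu_supports, the remaining member of the family named above):

void dispatch() {
  if (__builtin_cpu_supports("sse4.2")) {
    // SSE4.2 path
  } else {
    // generic fallback
  }
}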
llvm-svn: 240994
We failed to see that we should have deferred the creation of a type
which references a type currently under construction because of atomic
sugar.
This fixes PR23985.
llvm-svn: 240989
We had two separate paths for member pointer conversion: one which
takes a constant and another which takes an arbitrary value. In the
latter case, we are permitted to construct arbitrary instructions.
It turns out that the bulk of the member pointer conversion is sharable
if we construct an artificial IRBuilder.
llvm-svn: 240921
This patch corresponds to review:
http://reviews.llvm.org/D10637
This is the first round of additions of missing builtins listed in the ABI document. More to come (this builds on what seurer already added). This patch adds:
vector signed long long vec_abs(vector signed long long)
vector double vec_abs(vector double)
vector signed long long vec_add(vector signed long long, vector signed long long)
vector unsigned long long vec_add(vector unsigned long long, vector unsigned long long)
vector double vec_add(vector double, vector double)
vector double vec_and(vector bool long long, vector double)
vector double vec_and(vector double, vector bool long long)
vector double vec_and(vector double, vector double)
vector signed long long vec_and(vector signed long long, vector signed long long)
vector double vec_andc(vector bool long long, vector double)
vector double vec_andc(vector double, vector bool long long)
vector double vec_andc(vector double, vector double)
vector signed long long vec_andc(vector signed long long, vector signed long long)
vector double vec_ceil(vector double)
vector bool long long vec_cmpeq(vector double, vector double)
vector bool long long vec_cmpge(vector double, vector double)
vector bool long long vec_cmpge(vector signed long long, vector signed long long)
vector bool long long vec_cmpge(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmpgt(vector double, vector double)
vector bool long long vec_cmple(vector double, vector double)
vector bool long long vec_cmple(vector signed long long, vector signed long long)
vector bool long long vec_cmple(vector unsigned long long, vector unsigned long long)
vector bool long long vec_cmplt(vector double, vector double)
vector bool long long vec_cmplt(vector signed long long, vector signed long long)
vector bool long long vec_cmplt(vector unsigned long long, vector unsigned long long)
llvm-svn: 240821
Skip calls to HasTrivialDestructorBody() in the case where the
destructor is never invoked. Alternatively, Richard proposed to change
Sema to declare a trivial destructor for anonymous union members, which
seems too wasteful.
Differential Revision: http://reviews.llvm.org/D10508
llvm-svn: 240742
When a profile file cannot be opened, we used to display just the error
message but not the name of the profile the compiler was trying to open.
This will become useful in the next set of patches that introduce
GCC-compatible flags to specify profiles.
llvm-svn: 240715
Integer variants are implemented as atomicrmw or cmpxchg instructions.
Atomic add for floating point (__nvvm_atom_add_gen_f()) is implemented
as a call to an overloaded @llvm.nvvm.atomic.load.add.f32.* LLVM
intrinsic.
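A minimal sketch using the builtin named above (illustrative; assumed,
like atomicAdd, to return the previous value):

float atomic_accumulate(float *p, float v) {
  // When targeting NVPTX, this lowers to a call to the overloaded
  // @llvm.nvvm.atomic.load.add.f32.* intrinsic.
  return __nvvm_atom_add_gen_f(p, v);
}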
Differential Revision: http://reviews.llvm.org/D10666
llvm-svn: 240669
Summary:
Byval argument pair formation assumes that if a type is less than 8 bytes
it must be an integer and not a pointer, which is not true for x32 and NaCl.
Relax the assertion and add a test for a codegen case that triggered it.
Reviewers: jvoung
Subscribers: jfb, cfe-commits
Differential Revision: http://reviews.llvm.org/D10701
llvm-svn: 240600
If a task directive has an associated 'depend' clause, then the function

kmp_int32 __kmpc_omp_task_with_deps(ident_t *loc_ref, kmp_int32 gtid,
    kmp_task_t *new_task, kmp_int32 ndeps, kmp_depend_info_t *dep_list,
    kmp_int32 ndeps_noalias, kmp_depend_info_t *noalias_dep_list)

must be called instead of __kmpc_omp_task().
If this directive also has an associated 'if' clause, then before the call
to __kmpc_omp_task_begin_if0() the function

void __kmpc_omp_wait_deps(ident_t *loc_ref, kmp_int32 gtid, kmp_int32 ndeps,
    kmp_depend_info_t *dep_list, kmp_int32 ndeps_noalias,
    kmp_depend_info_t *noalias_dep_list)

must be called.
Array sections are not supported yet.
llvm-svn: 240532
This fixes a serious bug in r240462: checking the BuiltinID for
ARM::BI_MoveToCoprocessor* in EmitBuiltinExpr() ignores the fact that
each target has an overlapping range of the BuiltinID values. That check
can trigger for builtins from other targets, leading to very bad
behavior.
Part of the reason I did not implement r240462 this way to begin with is
the special handling of the last argument for Neon builtins. In this
change, I have factored out the check to see which builtins have that
extra argument into a new HasExtraNeonArgument() function. There is still
some awkwardness in having to check for those builtins in two separate
places, i.e., once to see if the extra argument is present and once to
generate the appropriate IR, but this seems much cleaner than my previous
patch.
llvm-svn: 240522
The Microsoft-extension _MoveToCoprocessor and _MoveToCoprocessor2
builtins take the register value to be moved as the first argument,
but the corresponding mcr and mcr2 LLVM intrinsics expect that value
to be the third argument. Handle this as a special case, while still
leaving those intrinsics as generic MSBuiltins. I considered the
alternative of handling these in EmitARMBuiltinExpr, but that does
not work well for the follow-up change that I'm going to make to improve
the error handling for PR22560 -- we need the GetBuiltinType() checks
for ICEArguments, and the ARM version of that code is only used for
Neon intrinsics where the last argument is special and not
checked in the normal way.
llvm-svn: 240462
Virtual inheritance member pointers are always relative to the vbindex,
even when the member pointer doesn't point into a virtual base. This is
corrected by adjusting the non-virtual offset backwards from the vbptr
back to the top of the most derived class. While we performed this
adjustment when manifesting member pointers as constants or when
performing conversions, we didn't perform the adjustment when mangling
them.
llvm-svn: 240453
Parsing and sema analysis (without support for array sections in arguments) for 'depend' clause (used in 'task' directive, OpenMP 4.0).
llvm-svn: 240409
Member pointers in the MS ABI are made complicated due to the following:
- Virtual methods in the most derived class (MDC) might live in a
vftable in a virtual base.
- There are four different representations of member pointer: single
inheritance, multiple inheritance, virtual inheritance and the "most
general" representation.
- Bases might have a *more* general representation than classes which
derived from them, a most surprising result.
We believed that we could treat all member pointers as-if they were a
degenerate case of the multiple inheritance model. This fell apart once
we realized that implementing standard member pointers using this ABI
requires referencing members with a non-zero vbindex.
On a bright note, all but the virtual inheritance model operate rather
similarly. The virtual inheritance member pointer representation
awkwardly requires a virtual base adjustment in order to refer to
entities in the MDC.
However, the first virtual base might be quite far from the start of the
virtual base. This means that we must add a negative non-virtual
displacement.
However, things get even more complicated. The most general
representation interprets vbindex zero differently from the virtual
inheritance model: it doesn't reference the vbtable at all.
It turns out that this complexity can increase for quite some time:
consider a derived to base conversion from the most general model to the
multiple inheritance model...
To manage this complexity we introduce a concept of "normalized" member
pointer which allows us to treat all three models as the most general
model. Then we try to figure out how to map this generalized member
pointer onto the destination member pointer model. I've done my best to
furnish the code with comments explaining why each adjustment is
performed.
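A hedged sketch of a member pointer that needs the virtual inheritance
representation (illustrative):

struct A {
  virtual void f();
};
struct B : virtual A {
  void f() override; // in the MS ABI this lives in A's vftable, inside the vbase
};

// The member pointer must carry a vbindex to locate the virtual base, plus
// a non-virtual displacement measured from B's vbptr.
void (B::*mp)() = &B::f;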
This fixes PR23878.
llvm-svn: 240384
The MS ABI has very complicated member pointers. Don't attempt to
synthesize the final member pointer ab ovo usque ad mala in one go.
Instead, start with a member pointer which points to the declaration in
question as if its decl context were the target class. Then, utilize
our conversion logic to convert it to the target type.
This allows us to simplify how we think about member pointers because we
don't need to consider non-zero nv adjustments before we even generate
the member pointer. Furthermore, it gives our adjustment logic more
exposure by utilizing it in a common path.
llvm-svn: 240383
As specified in the SysV AVX512 ABI drafts. It follows the same scheme
as AVX2:
Arguments of type __m512 are split into eight eightbyte chunks.
The least significant one belongs to class SSE and all the others
to class SSEUP.
This also means we change the OpenMP SIMD default alignment on AVX512.
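A hedged sketch of the classification (illustrative; requires an AVX512 target):

#include <immintrin.h>

// 64 bytes = eight eightbytes: the first is class SSE, the remaining
// seven are SSEUP, so the value travels in a single zmm register.
__m512 pass_through(__m512 v) { return v; }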
Based on r240337.
Differential Revision: http://reviews.llvm.org/D9894
llvm-svn: 240338
The patch is generated using this command:
$ tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
-checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
work/llvm/tools/clang
To reduce churn, not touching namespaces spanning less than 10 lines.
llvm-svn: 240270
Testcase provided in the PR by Christian Shelton and
reduced by David Majnemer.
PR: 23584
Differential Revision: http://reviews.llvm.org/D10508
Reviewed by: rnk
llvm-svn: 240242
This patch adds initial support for the -fsanitize=kernel-address flag to Clang.
Right now it's quite restricted: only out-of-line instrumentation is supported, globals are not instrumented, some GCC kasan flags are not supported.
Using this patch I am able to build and boot the KASan tree with LLVMLinux patches from github.com/ramosian-glider/kasan/tree/kasan_llvmlinux.
To disable KASan instrumentation for a certain function, __attribute__((no_sanitize("kernel-address"))) can be used.
llvm-svn: 240131
Clang's control flow integrity implementation works by conceptually attaching
"tags" (in the form of bitset entries) to each virtual table, identifying
the names of the classes that the virtual table is compatible with. Under
the Itanium ABI, it is simple to assign tags to virtual tables; they are
simply the address points, which are available via VTableLayout. Because any
overridden methods receive an entry in the derived class's virtual table,
a check for an overridden method call can always be done by checking the
tag of whichever derived class overrode the method call.
The Microsoft ABI is a little different, as it does not directly use address
points, and overrides in a derived class do not cause new virtual table entries
to be added to the derived class; instead, the slot in the base class is
reused, and the compiler needs to adjust the this pointer at the call site
to (generally) the base class that initially defined the method. After the
this pointer has been adjusted, we cannot check for the derived class's tag,
as the virtual table may not be compatible with the derived class. So we
need to determine which base class we have been adjusted to.
Specifically, at each call site, we use ASTRecordLayout to identify the most
derived class whose virtual table is laid out at the "this" pointer offset
we are using to make the call, and check the virtual table for that tag.
Because address point information is unavailable, we "reconstruct" it as
follows: any virtual tables we create for a non-derived class receive a tag
for that class, and virtual tables for a base class inside a derived class
receive a tag for the base class, together with tags for any derived classes
which are laid out at the same position as the derived class (and therefore
have compatible virtual tables).
Differential Revision: http://reviews.llvm.org/D10520
llvm-svn: 240117
This causes programs compiled with this flag to print a diagnostic when
a control flow integrity check fails instead of aborting. Diagnostics are
printed using UBSan's runtime library.
The main motivation of this feature over -fsanitize=vptr is fidelity with
the -fsanitize=cfi implementation: the diagnostics are printed under exactly
the same conditions as those which would cause -fsanitize=cfi to abort the
program. This means that the same restrictions apply regarding compiling
all translation units with -fsanitize=cfi, cross-DSO virtual calls are
forbidden, etc.
Differential Revision: http://reviews.llvm.org/D10268
llvm-svn: 240109
This flag controls whether a given sanitizer traps upon detecting
an error. It currently only supports UBSan. The existing flag
-fsanitize-undefined-trap-on-error has been made an alias of
-fsanitize-trap=undefined.
This change also cleans up some awkward behavior around the combination
of -fsanitize-trap=undefined and -fsanitize=undefined. Previously we
would reject command lines containing the combination of these two flags,
as -fsanitize=vptr is not compatible with trapping. This required the
creation of -fsanitize=undefined-trap, which excluded -fsanitize=vptr
(and -fsanitize=function, but this seems like an oversight).
Now, -fsanitize=undefined is an alias for -fsanitize=undefined-trap,
and if -fsanitize-trap=undefined is specified, we treat -fsanitize=vptr
as an "unsupported" flag, which means that we error out if the flag is
specified explicitly, but implicitly disable it if the flag was implied
by -fsanitize=undefined.
Differential Revision: http://reviews.llvm.org/D10464
llvm-svn: 240105
The most general model has fields for the vbptr offset and the vbindex.
Don't initialize the vbptr offset if the vbindex is 0: we aren't
referencing an entity from a vbase.
Getting this wrong can make member pointer equality fail.
llvm-svn: 240043
Added parsing, sema analysis and codegen for '#pragma omp taskgroup' directive (OpenMP 4.0).
The code for the directive is generated the following way:

#pragma omp taskgroup
<body>

is translated into:

void __kmpc_taskgroup(<loc>, thread_id);
<body>
void __kmpc_end_taskgroup(<loc>, thread_id);
llvm-svn: 240011
Added codegen for combined 'omp for simd' directives, that is a combination of 'omp for' directive followed by 'omp simd' directive. Includes support for all clauses.
llvm-svn: 239990
The following code is generated for a reduction clause within an 'omp simd' loop construct:

#pragma omp simd reduction(op:var)
for (...)
  <body>

is translated into:

alloca priv_var
priv_var = <initial reduction value>;
<loop_start>:
  <body> // references to the original 'var' are replaced by 'priv_var'
<loop_end>:
var op= priv_var;
llvm-svn: 239881
Previously the last iteration of simd loop-based OpenMP constructs was generated as separate code. This is not required, and codegen is simplified.
llvm-svn: 239810
We were propagating the coverage map into the body of an if statement,
but not into the condition thereafter. This is fine as long as the two
locations are in the same virtual file, but they won't be when the
"if" part of the statement is from a macro and the condition is not.
llvm-svn: 239803
This patch adds the -fsanitize=safe-stack command line argument for clang,
which enables the Safe Stack protection (see http://reviews.llvm.org/D6094
for the detailed description of the Safe Stack).
This patch is our implementation of the safe stack on top of Clang. The
patches make the following changes:
- Add -fsanitize=safe-stack and -fno-sanitize=safe-stack options to clang
to control safe stack usage (the safe stack is disabled by default).
- Add __attribute__((no_sanitize("safe-stack"))) attribute to clang that can be
used to disable the safe stack for individual functions even when enabled
globally.
Original patch by Volodymyr Kuznetsov and others at the Dependable Systems
Lab at EPFL; updates and upstreaming by myself.
Differential Revision: http://reviews.llvm.org/D6095
llvm-svn: 239762
in section 10.1, __arm_{w,r}sr{,p,64}.
This includes arm_acle.h definitions with builtins and codegen to support
these, the intrinsics are implemented by generating read/write_register calls
which get appropriately lowered in the backend based on the register string
provided. SemaChecking is also implemented to fault invalid parameters.
Differential Revision: http://reviews.llvm.org/D9697
llvm-svn: 239737
Instead, just EvaluateAsInt().
Follow-up to r239549: rsmith points out that isICE() is expensive;
seems like it's not the right concept anyway, as it fails on
`static const' in C, and will actually trigger the assert below on:
test/Sema/inline-asm-validate-x86.c
llvm-svn: 239651
Summary:
In addition to easier syntax, IRBuilder makes sure to set correct
debug locations for newly added instructions (bitcast and
llvm.lifetime itself). This restores the original behavior, which
was modified by r234581 (reapplied as r235553).
Extend one of the tests to check for debug locations.
Test Plan: regression test suite
Reviewers: aadg, dblaikie
Subscribers: cfe-commits, majnemer
Differential Revision: http://reviews.llvm.org/D10418
llvm-svn: 239643
If llvm.lifetime.end turns out to be the first instruction in the last
basic block, we can decrement the iterator twice, going past rend.
At the moment, this can never happen because llvm.lifetime.end always
goes immediately after bitcast, but relying on this is very brittle.
llvm-svn: 239638
Right now we're ignoring the fpmath attribute since there's no
backend support for a feature like this and to do so would require
checking the validity of the strings and doing general subtarget
feature parsing of valid and invalid features with the target
attribute feature.
llvm-svn: 239582
Modeled after the gcc attribute of the same name, this feature
allows source level annotations to correspond to backend code
generation. In llvm particular parlance, this allows the adding
of subtarget features and changing the cpu for a particular function
based on source level hints.
This has been added into the existing support for function level
attributes without particular verification for any target outside
of whether or not the backend will support the features/cpu given
(similar to section, etc).
llvm-svn: 239579
Specifying #pragma clang loop vectorize(assume_safety) on a loop adds the
mem.parallel_loop_access metadata to each load/store operation in the loop. This
metadata tells loop access analysis (LAA) to skip memory dependency checking.
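A minimal usage sketch:

void scale(float *a, const float *b, int n) {
#pragma clang loop vectorize(assume_safety)
  for (int i = 0; i < n; ++i)
    a[i] = 2.0f * b[i]; // each load/store gets llvm.mem.parallel_loop_access
}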
llvm-svn: 239572
For inline assembly immediate constraints, we currently always use
EmitScalarExpr, instead of directly emitting the constant. When the
overflow sanitizer is enabled, this generates overflow intrinsics
instead of constants.
Instead, emit a constant for constraints that either require an
immediate (e.g. 'I' on X86), or only accepts constants (immediate
or symbolic; i.e., don't accept registers or memory).
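A hedged illustration (x86; not from the commit's tests) of a constraint
that requires a true immediate:

enum { kAmount = 2 };

int rotate(int x) {
  // 'I' on X86 only accepts an integer constant in [0, 31]; the operand is
  // now emitted directly as a constant rather than through EmitScalarExpr,
  // so the overflow sanitizer cannot turn it into an intrinsic call.
  asm("roll %1, %0" : "+r"(x) : "I"(kAmount + 1));
  return x;
}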
Fixes PR19763.
Differential Revision: http://reviews.llvm.org/D10255
llvm-svn: 239549
Remove the restriction which forbade forming pointers to member
functions which had parameter types or return types which were not
convertible.
llvm-svn: 239499
CodeGenOptions and onto the PassManagerBuilder. This enables gating
the new EliminateAvailableExternally module pass on whether we are
preparing for LTO.
If we are preparing for LTO (e.g. a -flto -c compile), the new pass is not
included as we want to preserve available externally functions for possible
link time inlining.
llvm-svn: 239481
Based on previous discussion on the mailing list, clang currently lacks support
for C99 partial re-initialization behavior:
Reference: http://lists.cs.uiuc.edu/pipermail/cfe-dev/2013-April/029188.html
Reference: http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_253.htm
This patch attempts to fix this problem.
Given the following code snippet,
struct P1 { char x[6]; };
struct LP1 { struct P1 p1; };
struct LP1 l = { .p1 = { "foo" }, .p1.x[2] = 'x' };
// this example is adapted from the example for "struct fred x[]" in DR-253;
// currently clang produces in l: { "\0\0x" },
// whereas gcc 4.8 produces { "fox" };
// with this fix, clang will also produce: { "fox" };
Differential Revision: http://reviews.llvm.org/D5789
llvm-svn: 239446
This commit adds back the code that seems to have been dropped unintentionally
in r176985.
rdar://problem/13752163
Differential Revision: http://reviews.llvm.org/D10100
llvm-svn: 239426
On ARM/AArch64, we currently always use EmitScalarExpr for the immediate
builtin arguments, instead of directly emitting the constant. When the
overflow sanitizer is enabled, this generates overflow intrinsics
instead of constants, breaking assumptions in various places.
Instead, use the knowledge of "immediates" to directly emit a constant:
- teach the tablegen backend to emit the "immediate" modifiers
- use those modifiers in the NEON CodeGen, on ARM and AArch64.
Fixes PR23517.
Differential Revision: http://reviews.llvm.org/D10045
llvm-svn: 239002
This patch fixes an assertion failure in method
'X86_64ABIInfo::GetByteVectorType'.
Method 'GetByteVectorType' (in TargetInfo.cpp) is responsible
for mapping a QualType 'Ty' (for an argument or return value) to an LLVM IR
type that, according to the ABI, must be passed in a XMM/YMM vector register.
When selecting the IR vector type, method 'GetByteVectorType' always tries to
choose the "best" IR vector type for the 'Ty' in input. In particular, if Ty
is a wrapper structure, it keeps unwrapping it until it finds a vector type VTy.
That VTy is the "preferred IR type".
However, function 'isSingleElementStructure' (used to unwrap structures) does
not know how to look through union types. So, before this patch, if Ty was in
a nest of wrapper structures with at least two union types, we would have
triggered an assertion failure (added at revision 230971).
With this patch, if method 'GetByteVectorType' fails to find the preferred
vector type, we just return a valid (although potentially 'less friendly')
vector type based on the type size. So, rather than asserting on an 'unexpected'
'Ty' in input, we conservatively return vector type <2 x double> if Ty is 16
bytes, or <4 x double> if Ty is 32 bytes.
Differential Revision: http://reviews.llvm.org/D10190
llvm-svn: 238861
The first named data member is the field used to default initialize the
union. An IndirectFieldDecl can introduce the first named data member
of a union.
llvm-svn: 238649
If the type isn't trivially moveable emplace can skip a potentially
expensive move. It also saves a couple of characters.
Call sites were found with the ASTMatcher + some semi-automated cleanup.
memberCallExpr(
    argumentCountIs(1), callee(methodDecl(hasName("push_back"))),
    on(hasType(recordDecl(has(namedDecl(hasName("emplace_back")))))),
    hasArgument(0, bindTemporaryExpr(
                       hasType(recordDecl(hasNonTrivialDestructor())),
                       has(constructExpr()))),
    unless(isInTemplateInstantiation()))
No functional change intended.
llvm-svn: 238601
This is a follow-up to r238266. It turned out structors are codegened through a different path,
and didn't get the storage class set in EmitGlobalFunctionDefinition.
llvm-svn: 238443
The representation of a pointer-to-member in the MS ABI is governed by
the layout of the relevant class or if a model has been explicitly
specified. If no model is specified, then an appropriate
"worst-case-scenario" model is implicitly chosen if, and only, if the
pointer-to-member type's representation was needed.
Debug info cannot force a pointer-to-member type to have a
representation so do not try to query the size of such a type unless we
know it is safe to do so.
llvm-svn: 238259
Types can be classified as being zero-initializable or
non-zero-initializable. We used to classify array types by giving them
the classification of their base element type. However, incomplete
array types are never initialized directly and thus are always
zero-initializable.
llvm-svn: 238256
Re-land the change r238200, but with modifications in the tests that should
prevent new failures in some environments as reported with the original
change on the mailing list.
llvm-svn: 238253
On MIPS, the unsigned int type should be sign-extended, not zero-extended.
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D9198
llvm-svn: 238200
We already have the ABI, we don't need a "HasAVX" flag.
This will also make it easier to add an AVX512 ABI.
No functional change intended.
llvm-svn: 237989
If the loop control variable in a worksharing construct is marked as lastprivate, we should copy the last calculated value of the private counter back to the original variable.
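A minimal sketch:

void work(int);

void run() {
  int i;
#pragma omp parallel for lastprivate(i)
  for (i = 0; i < 100; ++i)
    work(i);
  // the private counter's value from the sequentially last iteration has
  // been copied back into the original 'i'
}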
llvm-svn: 237879
-fprofile-instr-generate does not emit counter increment intrinsics
for Dtor_Deleting and Dtor_Complete destructors with assigned
counters. This causes unnecessary [-Wprofile-instr-out-of-date]
warnings during profile-use runs even if the source has never been
modified since profile collection.
Patch by Betul Buyukkurt. Thanks!
llvm-svn: 237804
Patch fixes codegen for aggregate copying of VLAs. Currently method CodeGenFunction::EmitAggregateCopy() does not support copying of VLAs. The patch checks if the size of the type is 0, then checks if the type is actually a variable-length array. It then calculates the total number of elements in the array and the total size of the array in bytes:
<total number of elements in array> * aligned_sizeof(ElementType) (if copy assignment is requested).
If simple copying is requested, the size is calculated as:
<total number of elements in array> * aligned_sizeof(ElementType) - aligned_sizeof(ElementType) + sizeof(ElementType).
memcpy() is used with this calculated size of the VLA.
Differential Revision: http://reviews.llvm.org/D9851
llvm-svn: 237768
The implicit conversion was causing issues for a helper being added that
would take an llvm::Function rather than an llvm::Value to make the
CallInst. Since we'll eventually need to specify the type of the call
explicitly anyway, fix these up to avoid the future ambiguity.
llvm-svn: 237729
This modification generates proper copyin/initialization sequences for array variables/parameters. Previously they were treated as pointers, not arrays.
llvm-svn: 237691
Also add trivial handling of transparent unions.
PPC32, MSP430, and XCore apparently all rely on DefaultABIInfo. This
should worry you, because DefaultABIInfo is not implementing the rules
of any particular ABI.
Fixes PR23097, patch by Andy Gibbs.
llvm-svn: 237630
The internal task structure must be generated like

typedef struct kmp_task {
  void *shareds;
  kmp_routine_entry_t routine;
  kmp_int32 part_id;
  kmp_routine_entry_t destructors;
} kmp_task_t;

struct kmp_task_t_with_privates {
  kmp_task_t task_data;
  .kmp_private. privates;
};

to avoid possible additional alignment bytes in the first fields (shareds, routine, part_id and destructors). The runtime library is not aware of such additional alignment bytes.
llvm-svn: 237561
With this change, enabling -fmodules-local-submodule-visibility results in name
visibility rules being applied to submodules of the current module in addition
to imported modules (that is, names no longer "leak" between submodules of the
same top-level module). This also makes it much safer to textually include a
non-modular library into a module: each submodule that textually includes that
library will get its own "copy" of that library, and so the library becomes
visible no matter which including submodule you import.
llvm-svn: 237473
The issue I was trying to solve in r236547 was about built-in macros,
but I disabled coverage in all system macros. This is actually a bit
of overkill, and makes the display of coverage around system macros
degrade unnecessarily. Instead, limit this to builtins specifically.
llvm-svn: 237397
Summary:
Stack space allocated for unused structures returned by functions was not
reused even when its lifetime didn't intersect with the lifetime of any
other objects that could use the same space.
The test added also checks for named and auto objects. It seems to make sense
to have this all in one place.
Reviewers: aadg, rsmith, rjmccall, rnk
Reviewed By: rnk
Subscribers: asl, cfe-commits
Differential Revision: http://reviews.llvm.org/D9743
llvm-svn: 237385
GetOutputStream() owns the stream it returns a pointer to, and the
pointer should never be freed by us. When we fail to load and exit
early, the unique_ptr still holds the pointer and frees it, which leads
to a compiler crash when CompilerInstance attempts to free it again.
Added regression test for failed bitcode linking.
Differential Revision: http://reviews.llvm.org/D9625
llvm-svn: 237159
The 'schedule' clause for combined directives requires additional processing. A special helper variable is generated and captured in the outlined parallel region for the 'parallel for' region. This captured variable is used to store the chunk expression from the 'schedule' clause in this 'parallel for' region.
llvm-svn: 237100
We didn't supporting taking the address of virtual member functions
which overrode a method in a virtual base. We simply need to encode the
virtual base index in the member pointer.
This fixes PR23452.
N.B. There is no data member pointer side to this change because taking
the address of a virtual bases' data member gives you a member pointer
whose type is derived from the virtual bases' type, not the most derived
type.
llvm-svn: 236962
MSVC 2015 renamed the symbol found by name lookup for 'std::terminate'
so we cannot rely on using '?terminate@@YAXXZ'. Furthermore, it seems
that 2015 will be the first release of MSVC which permits inlining a
function which is noexcept into a function which isn't. This is
implemented by creating a cleanup for the invoker which jumps to
__std_terminate. Clang's implementation of this aspect of the MSVC
scheme is slightly less efficient in this respect because we use a
catch handler configured as a catch-all handler instead.
llvm-svn: 236961
Patch from Geoff Berry <gberry@codeaurora.org>
Fix BackendConsumer::EmitOptimizationMessage() to check if the
DiagnosticInfoOptimizationBase object has a valid location before
calling getLocation() to avoid dereferencing a null pointer inside
getLocation() when no debug info is present.
llvm-svn: 236898
Functions with available_externally linkage will not be emitted to object
files (they will just be undefined symbols), so it does not make sense to
put them in comdats.
Creates a second overload of maybeSetTrivialComdat that uses the GlobalObject
instead of the Decl, and uses that in several places that had the faulty
logic.
Differential Revision: http://reviews.llvm.org/D9580
llvm-svn: 236879
Summary:
Possible coverage levels are:
* -fsanitize-coverage=func - function-level coverage
* -fsanitize-coverage=bb - basic-block-level coverage
* -fsanitize-coverage=edge - edge-level coverage
Extra features are:
* -fsanitize-coverage=indirect-calls - coverage for indirect calls
* -fsanitize-coverage=trace-bb - tracing for basic blocks
* -fsanitize-coverage=trace-cmp - tracing for cmp instructions
* -fsanitize-coverage=8bit-counters - frequency counters
Levels and features can be combined in comma-separated list, and
can be disabled by subsequent -fno-sanitize-coverage= flags, e.g.:
-fsanitize-coverage=bb,trace-bb,8bit-counters -fno-sanitize-coverage=trace-bb
is equivalent to:
-fsanitize-coverage=bb,8bit-counters
Original semantics of -fsanitize-coverage flag is preserved:
* -fsanitize-coverage=0 disables the coverage
* -fsanitize-coverage=1 is a synonym for -fsanitize-coverage=func
* -fsanitize-coverage=2 is a synonym for -fsanitize-coverage=bb
* -fsanitize-coverage=3 is a synonym for -fsanitize-coverage=edge
* -fsanitize-coverage=4 is a synonym for -fsanitize-coverage=edge,indirect-calls
Driver tries to diagnose invalid flag usage, in particular:
* At most one level (func,bb,edge) must be specified.
* "trace-bb" and "8bit-counters" features require some level to be specified.
See test case for more examples.
Test Plan: regression test suite
Reviewers: kcc
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D9577
llvm-svn: 236790
- added -fcuda-include-gpubinary option to incorporate results of
device-side compilation into host-side one.
- generate code to register GPU binaries and associated kernels
with CUDA runtime and clean-up on exit.
- added test case for init/deinit code generation.
Differential Revision: http://reviews.llvm.org/D9507
llvm-svn: 236765
Summary:
The next step is to add user-friendly control over these options
to driver via -fsanitize-coverage= option.
Test Plan: regression test suite
Reviewers: kcc
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D9545
llvm-svn: 236756
Fix for codegen of static variables declared inside of captured statements. Captured statements are actually transparent DeclContexts, so we have to skip them when trying to get a mangled name for statics.
Differential Revision: http://reviews.llvm.org/D9522
llvm-svn: 236701
The MSVC 2015 ABI utilizes a rather straightforward adaptation of the
algorithm found in the appendix of N2382. While we are here, implement
support for emitting cleanups if an exception is thrown while we are
initializing a static local variable.
llvm-svn: 236697
Inner bodies of OpenMP worksharing loop-based constructs with dynamic or guided scheduling are allowed to be marked with !llvm.mem.parallel_loop_access metadata for better optimization. Worksharing constructs with static scheduling cannot be marked this way (according to the OpenMP standard, "A data dependence between the same logical iterations in two such loops is guaranteed").
Constructs with auto and runtime scheduling are also not marked, because the automatically chosen scheduling may itself be static.
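As a sketch (assuming nothing beyond the rule above), a loop whose body may now carry the metadata:
```
// With dynamic (or guided) scheduling, the loop body's accesses may be
// marked with !llvm.mem.parallel_loop_access; with schedule(static)
// they may not, per the OpenMP guarantee quoted above.
void scale(float *a, const float *b, int n) {
  #pragma omp parallel for schedule(dynamic)
  for (int i = 0; i < n; ++i)
    a[i] = b[i] * 2.0f;
}
```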
Differential Revision: http://reviews.llvm.org/D9518
llvm-svn: 236693
Fixed codegen for reduction operations min, max, && and ||. Codegen for them is quite similar and I was confused by this similarity.
Also added a call to kmpc_end_reduce() in the atomic part of reduction codegen (a call to kmpc_end_reduce_nowait() is not required).
Differential Revision: http://reviews.llvm.org/D9513
llvm-svn: 236689
All callers should be passing `CXXConstructorDecl` or
`CXXDestructorDecl` here, so use `cast<>` instead of `dyn_cast<>` when
setting up the `GlobalDecl`.
llvm-svn: 236651
It doesn't make much sense to try to show coverage inside system
macros, and source locations in builtins confuse the coverage
mapping. Just avoid doing this.
Fixes an assert that fired when a __block storage specifier starts a
region.
llvm-svn: 236547
This adds low-level builtins to allow access to all of the z13 vector
instructions. Note that instructions whose semantics can be described
by standard C (including clang extensions) do not get any builtins.
For each instruction whose semantics *cannot* (fully) be described, we
define a builtin named __builtin_s390_<insn> that directly maps to this
instruction. These are intended to be compatible with GCC.
For instructions that also set the condition code, the builtin will take
an extra argument of type "int *" at the end. The integer pointed to by
this argument will be set to the post-instruction CC value.
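A minimal sketch of that convention; the choice of vceqbs ("vector compare equal" with CC set) and its exact signature are assumptions used for illustration, not taken from the patch:
```
// Compile for z13, e.g. -target s390x-linux-gnu -march=z13.
typedef signed char v16qi __attribute__((vector_size(16)));

bool all_equal(v16qi a, v16qi b) {
  int cc;
  // The trailing "int *" receives the post-instruction condition code.
  (void)__builtin_s390_vceqbs(a, b, &cc);
  return cc == 0;  // CC 0: all elements compared equal (assumption).
}
```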
For many instructions, the low-level builtin is mapped to the corresponding
LLVM IR intrinsic. However, a number of instructions can be represented
in standard LLVM IR without requiring use of a target intrinsic.
Some instructions require immediate integer operands within a certain
range. Those are verified at the Sema level.
Based on a patch by Richard Sandiford.
llvm-svn: 236532
This patch adds support for the z13 architecture type. For compatibility
with GCC, a pair of options -mvx / -mno-vx can be used to selectively
enable/disable use of the vector facility.
When the vector facility is present, we default to the new vector ABI.
This is characterized by two major differences:
- Vector types are passed/returned in vector registers
(except for unnamed arguments of a variable-argument list function).
- Vector types are at most 8-byte aligned.
The reason for the choice of 8-byte vector alignment is that the hardware
is able to efficiently load vectors at 8-byte alignment, and the ABI only
guarantees 8-byte alignment of the stack pointer, so requiring any higher
alignment for vectors would require dynamic stack re-alignment code.
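A sketch of the observable consequence (an assumption based on the rules above, for a z13 target with the vector facility enabled):
```
typedef int v4si __attribute__((vector_size(16)));
// 16 bytes wide, but the vector ABI caps the alignment at 8.
static_assert(sizeof(v4si) == 16, "vector is 16 bytes");
static_assert(alignof(v4si) == 8, "alignment capped at 8 by the ABI");
```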
However, for compatibility with old code that may use vector types, when
*not* using the vector facility, the old alignment rules (vector types
are naturally aligned) remain in use.
These alignment rules are not only implemented at the C language level,
but also at the LLVM IR level. This is done by selecting a different
DataLayout string depending on whether the vector ABI is in effect or not.
Based on a patch by Richard Sandiford.
llvm-svn: 236531
Destructors are never called for cleanups, so we can't use SmallVector as a member.
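To illustrate why (a sketch, with std::vector standing in for SmallVector; this is not the patch's code):
```
#include <new>
#include <vector>

struct Cleanup {
  std::vector<int> vals;  // owns heap memory once it grows
};

int main() {
  // Cleanups live in a buffer and are "popped" by abandoning it;
  // ~Cleanup never runs, so vals's heap allocation would leak.
  alignas(Cleanup) unsigned char buf[sizeof(Cleanup)];
  Cleanup *c = new (buf) Cleanup{{1, 2, 3}};
  (void)c;  // buffer abandoned here; destructor intentionally not called
}
```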
Differential Revision: http://reviews.llvm.org/D9399
llvm-svn: 236491
Destructors are never called for cleanups, so we can't use SmallVector as a member.
Differential Revision: http://reviews.llvm.org/D9399
llvm-svn: 236487
Destructors are never called for cleanups, so we can't use SmallVector as a member.
Differential Revision: http://reviews.llvm.org/D9399
llvm-svn: 236482
Destructors are never called for cleanups, so we can't use SmallVector as a member.
Differential Revision: http://reviews.llvm.org/D9399
llvm-svn: 236480
For tasks, codegen for private/firstprivate variables differs from that for other directives.
1. Build an internal structure of privates for each private variable:
   struct .kmp_privates_t. {
     Ty1 var1;
     ...
     Tyn varn;
   };
2. Add a new field with the list of privates to the kmp_task_t type:
   struct kmp_task_t {
     void *shareds;
     kmp_routine_entry_t routine;
     kmp_int32 part_id;
     kmp_routine_entry_t destructors;
     .kmp_privates_t. privates;
   };
3. Create a function that calls the destructors of all privates after the end of the task region:
   kmp_int32 .omp_task_destructor.(kmp_int32 gtid, kmp_task_t *tt) {
     ~Destructor(&tt->privates.var1);
     ...
     ~Destructor(&tt->privates.varn);
     return 0;
   }
4. Perform initialization of all firstprivate fields (simple copying for POD data, copy constructor calls for classes) and provide the address of the destructor function after the kmpc_omp_task_alloc() call and before the kmpc_omp_task() call:
   kmp_task_t *new_task = __kmpc_omp_task_alloc(ident_t *, kmp_int32 gtid, kmp_int32 flags, size_t sizeof_kmp_task_t, size_t sizeof_shareds, kmp_routine_entry_t *task_entry);
   CopyConstructor(new_task->privates.var1, *new_task->shareds.var1_ref);
   new_task->shareds.var1_ref = &new_task->privates.var1;
   ...
   CopyConstructor(new_task->privates.varn, *new_task->shareds.varn_ref);
   new_task->shareds.varn_ref = &new_task->privates.varn;
   new_task->destructors = .omp_task_destructor.;
   kmp_int32 __kmpc_omp_task(ident_t *, kmp_int32 gtid, kmp_task_t *new_task)
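A source-level shape that triggers this sequence (a sketch; S and f are hypothetical names):
```
struct S {
  S() {}
  S(const S &) {}
  ~S() {}
};

void f() {
  S s;
  // firstprivate of class type: a copy-constructor call into the
  // privates block plus an entry in the destructor function.
  #pragma omp task firstprivate(s)
  {
    // the body uses the task's private copy of s
  }
}
```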
Differential Revision: http://reviews.llvm.org/D9370
llvm-svn: 236479
The fact that PGO has a say in how these branch weights are determined
isn't interesting to most of CodeGen, so it makes more sense for this
API to be accessible via CodeGenFunction rather than CodeGenPGO.
llvm-svn: 236380
The underlying problem is that there is currently no way to run
ObjCARCContract from llvm bitcode, which is required by ObjC ARC.
Fix the problem by always enabling the ObjCARCContract pass when
optimization is enabled. The ObjCARCContract pass has almost no
overhead on code that is not using ARC.
llvm-svn: 236372
No functional change. This just makes it more obvious that the logic
in ComputeRegionCounts only depends on the counter map and local
state.
llvm-svn: 236370
This removes the RegionCounter class, which is only used as a helper
in the ComputeRegionCounts stmt visitor. This class is just an extra
layer of abstraction that makes the code harder to follow at this
point, and removing it makes the logic quite a bit more direct.
llvm-svn: 236364
This change is the third of 3 patches to add support for specifying
the profile output from the command line via -fprofile-instr-generate=<path>,
where the specified output path/file will be overridden by the
LLVM_PROFILE_FILE environment variable.
This patch adds the necessary support to the clang frontend, and adds a
new test.
The compiler-rt and llvm parts are r236055 and r236288, respectively.
Patch by Teresa Johnson. Thanks!
llvm-svn: 236289
We were assigning the counter for the body of the loop to the loop
variable initialization for some reason here, but our tests completely
lacked coverage for range-for loops. This fixes that and makes the
logic generally more similar to the logic for a regular for loop.
llvm-svn: 236277
For tasks, codegen for private/firstprivate variables differs from that for other directives.
1. Build an internal structure of privates for each private variable:
   struct .kmp_privates_t. {
     Ty1 var1;
     ...
     Tyn varn;
   };
2. Add a new field with the list of privates to the kmp_task_t type:
   struct kmp_task_t {
     void *shareds;
     kmp_routine_entry_t routine;
     kmp_int32 part_id;
     kmp_routine_entry_t destructors;
     .kmp_privates_t. privates;
   };
3. Create a function that calls the destructors of all privates after the end of the task region:
   kmp_int32 .omp_task_destructor.(kmp_int32 gtid, kmp_task_t *tt) {
     ~Destructor(&tt->privates.var1);
     ...
     ~Destructor(&tt->privates.varn);
     return 0;
   }
4. Perform default initialization of all private fields (no initialization for POD data, default constructor calls for classes) and provide the address of a destructor function after the kmpc_omp_task_alloc() call and before the kmpc_omp_task() call:
   kmp_task_t *new_task = __kmpc_omp_task_alloc(ident_t *, kmp_int32 gtid, kmp_int32 flags, size_t sizeof_kmp_task_t, size_t sizeof_shareds, kmp_routine_entry_t *task_entry);
   DefaultConstructor(new_task->privates.var1);
   new_task->shareds.var1_ref = &new_task->privates.var1;
   ...
   DefaultConstructor(new_task->privates.varn);
   new_task->shareds.varn_ref = &new_task->privates.varn;
   new_task->destructors = .omp_task_destructor.;
   kmp_int32 __kmpc_omp_task(ident_t *, kmp_int32 gtid, kmp_task_t *new_task)
Differential Revision: http://reviews.llvm.org/D9322
llvm-svn: 236207
Fixed initialization of 'single' region completion, and changed the type of the third argument of the __kmpc_copyprivate() runtime function to size_t.
llvm-svn: 236198
and as artificial local variables in the debug info.
This is a follow-up to r236059. We can't get rid of the local variables
entirely because the gdb buildbot depends on them, but we can mark them
as artificial while still emitting the correct debug info. As I learned
from review comments other compilers also follow this model.
A paired commit in LLVM temporarily relaxes the debug info verifier to
not check the integrity of DW_OP_bit_pieces of artificial variables.
rdar://problem/20730771
llvm-svn: 236125
LLVM r236120 renamed debug info IR constructs to use a `DI` prefix, now
that the `DIDescriptor` hierarchy has been gone for about a week. This
commit was generated using the rename-md-di-nodes.sh upgrade script
attached to PR23080, followed by running clang-format-diff.py on the
`lib/` portion of the patch.
llvm-svn: 236121
This issue was fixed elsewhere in r235396 in a more general way, hence these
changes no longer do anything. Keep the testcase however, to ensure that we
don't regress this for ARM.
llvm-svn: 236104
in the debug info. This patch deletes a hack that emits the members
of local anonymous unions as local variables.
Besides being morally wrong, the existing representation using local
variables breaks internal assumptions about the local variables' storage
size.
Compiling
```
void fn1() {
union {
int i;
char c;
};
i = c;
}
```
with -g -O3 -verify will cause the verifier to fail after SROA splits
the 32-bit storage for the "local variable" c into two pieces because the
second piece is clearly outside the 8-bit range that is expected for a
variable of type char. Given the choice I'd rather fix the debug
representation than weaken the verifier.
Debuggers generally already know how to deal with anonymous unions when
they are members of C++ record types, but they may have problems finding
the local anonymous struct members in the expression evaluator.
rdar://problem/20730771
llvm-svn: 236059
This is just the clang-side of 32-bit SEH. LLVM still needs work, and it
will deterministically fail to compile until it's feature complete.
On x86, all outlined handlers have no parameters, but they do implicitly
take the EBP value passed in and use it to address locals of the parent
frame. We model this with llvm.frameaddress(1).
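A sketch of the source shape involved (assuming an i686-windows-msvc target and -fms-extensions; may_fault is a stand-in name):
```
extern "C" void may_fault();

void parent() {
  int local = 0;
  __try {
    may_fault();
  } __finally {
    // Outlined into a helper with no parameters; `local` is addressed
    // through the parent's EBP, recovered via llvm.frameaddress(1).
    local = 1;
  }
}
```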
This works (mostly), but __finally block inlining can break it. For now,
we apply the 'noinline' attribute. If we really want to inline __finally
blocks on 32-bit x86, we should teach the inliner how to untangle
frameescape and framerecover.
Promote the error diagnostic from codegen to sema. It now rejects SEH on
non-Windows platforms. LLVM doesn't implement SEH on non-x86 Windows
platforms, but there's nothing preventing it.
llvm-svn: 236052
ability to generate code that CodeGen likes. Test
cases can use this functionality by calling
// RUN: %clang_cc1 -emit-obj -o /dev/null -ast-merge %t.1.ast -ast-merge %t.2.ast %s
llvm-svn: 236011
When creating a global variable with a type of a struct with bitfields, we must
forcibly set the alignment of the global from the RecordDecl. We must do this so
that the proper bitfield alignment makes its way down to LLVM, since clang will
mangle the bitfields into one large type.
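A sketch of the kind of global involved (hypothetical layout):
```
struct Flags {
  unsigned a : 3;
  unsigned b : 29;
  unsigned long long c : 40;  // bitfields get packed into large integers
};

// The global's alignment must be forced from the RecordDecl so LLVM
// lays out the packed bitfield storage correctly.
Flags g_flags = {};
```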
llvm-svn: 235976
This makes sure that the front end is specific about what they're expecting
the backend to produce. Update a FIXME with the idea that the target-features
could be more precise using backend knowledge.
llvm-svn: 235936
Currently clang emits file-scope asm during *both* host and device
compilation modes, which is usually the wrong thing to do.
There's no way to attach any attribute to an __asm statement, so
there's no way to differentiate between host-side and device-side
file-scope asm. This patch makes clang match nvcc's behavior and
emit file-scope asm only during host-side compilation.
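For example (a sketch; the symbol name is hypothetical):
```
// Before this change, this file-scope asm was emitted in both host and
// device CUDA compilations; now it is emitted only host-side.
__asm__(".globl host_helper_symbol");
```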
Differential Revision: http://reviews.llvm.org/D9270
llvm-svn: 235905
Emit the following code for a 'taskwait' directive within a tied task:
call i32 @__kmpc_omp_taskwait(<loc>, i32 <thread_id>);
Differential Revision: http://reviews.llvm.org/D9245
llvm-svn: 235836
Emit code for the reduction clause. The following code should be emitted for reductions:
static kmp_critical_name lock = { 0 };

void reduce_func(void *lhs[<n>], void *rhs[<n>]) {
  *(Type0 *)lhs[0] = ReductionOperation0(*(Type0 *)lhs[0], *(Type0 *)rhs[0]);
  ...
  *(Type<n>-1 *)lhs[<n>-1] =
      ReductionOperation<n>-1(*(Type<n>-1 *)lhs[<n>-1], *(Type<n>-1 *)rhs[<n>-1]);
}
...
void *RedList[<n>] = {&<RHSExprs>[0], ..., &<RHSExprs>[<n>-1]};
switch (__kmpc_reduce{_nowait}(<loc>, <gtid>, <n>, sizeof(RedList), RedList,
                               reduce_func, &<lock>)) {
case 1:
  <LHSExprs>[0] = ReductionOperation0(*<LHSExprs>[0], *<RHSExprs>[0]);
  ...
  <LHSExprs>[<n>-1] = ReductionOperation<n>-1(*<LHSExprs>[<n>-1], *<RHSExprs>[<n>-1]);
  __kmpc_end_reduce{_nowait}(<loc>, <gtid>, &<lock>);
  break;
case 2:
  Atomic(<LHSExprs>[0] = ReductionOperation0(*<LHSExprs>[0], *<RHSExprs>[0]));
  ...
  Atomic(<LHSExprs>[<n>-1] = ReductionOperation<n>-1(*<LHSExprs>[<n>-1], *<RHSExprs>[<n>-1]));
  break;
default:;
}
Reduction variables are a kind of private variable: they have private copies, but their initial values are chosen in accordance with the reduction operation.
If a 'sections' directive has only a single section, the original shared variables are used instead, with a barrier at the end of the directive.
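One source-level shape that exercises this sequence (a sketch):
```
int sum_two(int x, int y) {
  int sum = 0;
  // Two sections, so the full reduce/end_reduce protocol above is used.
  #pragma omp parallel sections reduction(+ : sum)
  {
    #pragma omp section
    sum += x;
    #pragma omp section
    sum += y;
  }
  return sum;
}
```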
Differential Revision: http://reviews.llvm.org/D9242
llvm-svn: 235835
#pragma omp sections lastprivate(<var>)
<BODY>;
This construct is translated into something like:
<last_iter> = alloca i32
<init for lastprivates>
<last_iter> = 0
; Simple variables get no initializer; objects get a default constructor call.
; For arrays, perform element-by-element initialization via the default constructor.
...
OMP_FOR_START(..., <last_iter>, ...) ; sets <last_iter> to 1 if this is the last iteration
<BODY>
...
OMP_FOR_END
if (<last_iter> != 0) {
  <final copy for lastprivate> ; update the original variable with the lastprivate value
}
call __kmpc_cancel_barrier() ; an implicit barrier to avoid a possible data race
If there is only one section, there is no special code generation: the original shared variables are used and a barrier is emitted at the end of the directive.
Differential Revision: http://reviews.llvm.org/D9240
llvm-svn: 235834
If there are 2 or more sections in a 'sections' directive, the following code is generated:
<default init for privates>
@__kmpc_for_static_init_4();
<BODY for sections directive>
@__kmpc_for_static_fini()
If there is only one section, the following code is generated:
if (@__kmpc_single()) {
  <default init for privates>
  @__kmpc_end_single();
}
Differential Revision: http://reviews.llvm.org/D9239
llvm-svn: 235833
Emit the following code for a 'single' directive with a 'private' clause:
if (@__kmpc_single()) {
  <default init for privates>
  @__kmpc_end_single();
}
Differential Revision: http://reviews.llvm.org/D9238
llvm-svn: 235832
Fixes rdar://20621065.
A more elegant fix would preclude this case by defining the
rules such that zero-size classes are always formally empty.
I believe the only extensions which create zero-size classes
right now are flexible arrays and zero-length arrays; it's
not abstractly unreasonable to say that those don't count
as members for the purposes of emptiness, just as zero-width
bitfields don't count. But that's an ABI-affecting change
and requires further discussion; in the meantime, let's not
assert / miscompile.
llvm-svn: 235815
This fixes a crash when we're emitting coverage and a macro appears
between two binary conditional operators, i.e., "foo ?: MACRO ?: bar",
and fixes the interaction of macros and conditional operators in
general.
llvm-svn: 235793
Emit the following code for a 'single' directive with a 'firstprivate' clause:
if (@__kmpc_single()) {
  <init for firstprivates>
  @__kmpc_end_single();
}
@__kmpc_cancel_barrier(); // To avoid a data race in firstprivate init
Differential Revision: http://reviews.llvm.org/D9223
llvm-svn: 235694
The runtime function for the 'copyprivate' directive generates implicit barriers, so there is no need to emit one explicitly.
Differential Revision: http://reviews.llvm.org/D9215
llvm-svn: 235692
If there are 2 or more sections in a 'sections' directive, the following code is generated:
<init for firstprivates>
@__kmpc_cancel_barrier(); // To avoid a data race in firstprivate init
@__kmpc_for_static_init_4();
<BODY for sections directive>
@__kmpc_for_static_fini()
If there is only one section, the following code is generated:
if (@__kmpc_single()) {
  <init for firstprivates>
  @__kmpc_end_single();
}
@__kmpc_cancel_barrier(); // To avoid a data race in firstprivate init
Differential Revision: http://reviews.llvm.org/D9214
llvm-svn: 235691
The RegionCounter type does a lot of legwork, but most of it is only
meaningful within the implementation of CodeGenPGO. The uses elsewhere
in CodeGen generally just want to increment or read counters, so do
that directly.
llvm-svn: 235664
In r235553, Clang started emitting lifetime markers more often. This
caused false negatives in MSan, because MSan only poisons all allocas
once at function entry. Eventually, MSan should poison allocas at
lifetime start and probably also lifetime end, but until then, let's not
emit markers that aren't going to be useful.
llvm-svn: 235613
Adds codegen for 'atomic capture' constructs with the following forms of expressions/statements:
v = x binop= expr;
v = x++;
v = ++x;
v = x--;
v = --x;
v = x = x binop expr;
v = x = expr binop x;
{v = x; x binop= expr;}
{v = x; x++;}
{v = x; ++x;}
{v = x; x--;}
{v = x; --x;}
{x = x binop expr; v = x;}
{x binop= expr; v = x;}
{x++; v = x;}
{++x; v = x;}
{x--; v = x;}
{--x; v = x;}
{x = x binop expr; v = x;}
{x = expr binop x; v = x;}
{v = x; x = expr;}
If x and expr are integers, binop is associative or x is the LHS within the RHS of the assignment expression, and atomics are allowed for the type of x on the target platform, an atomicrmw instruction is emitted.
Otherwise, a compare-and-swap sequence is emitted.
The update of 'v' is not required to be atomic with respect to the read or write of 'x'.
bb:
  ...
  atomic load <x>
cont:
  <expected> = phi [ <x>, %bb ], [ <new_failed>, %cont ]
  <desired> = <expected> binop <expr>
  <res> = cmpxchg atomic &<x>, desired, expected
  <new_failed> = <res>.field1
  br <res>.field2, label %exit, label %cont
exit:
  atomic store <old/new x>, <v>
  ...
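A source form that lowers to this (a sketch): when the type and operation qualify, it becomes a single atomicrmw; otherwise the compare-and-swap loop above is emitted.
```
int capture_add(int &x, int expr) {
  int v;
  #pragma omp atomic capture
  v = x += expr;  // the "v = x binop= expr;" form from the list above
  return v;
}
```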
Differential Revision: http://reviews.llvm.org/D9049
llvm-svn: 235573
Summary:
Make sure signed overflow in "x--" is checked with the
llvm.ssub.with.overflow intrinsic and is reported as:
  "-2147483648 - 1 cannot be represented in type 'int'"
instead of:
  "-2147483648 + -1 cannot be represented in type 'int'"
like we do for unsigned overflow.
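For instance (a sketch):
```
#include <climits>

int main() {
  int x = INT_MIN;
  // With -fsanitize=signed-integer-overflow this is now reported as
  // "-2147483648 - 1 cannot be represented in type 'int'".
  x--;
  return x;
}
```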
Test Plan: clang + compiler-rt regression test suite
Reviewers: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D8236
llvm-svn: 235568
We try to use the member variable "FuncName" here, but we've also used
that name as a parameter. This ends with us getting the length of the
function name wrong when we generate the coverage data.
llvm-svn: 235565
These extra endcatch markers aren't helping identify regions to outline,
so let's get rid of them. LLVM outlines (more or less) from begincatch
to endcatch. Any unwind edge from an enclosed invoke is a transition to
a new exception handler, which has its own outlining markers.
llvm-svn: 235562