GCC has the alloc_align attribute, which is similar to assume_aligned, except that the attribute's argument is the index of the integer parameter that supplies the alignment.
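For illustration, a minimal sketch (my_aligned_alloc and its parameters
are hypothetical; the attribute argument is the 1-based index of the
alignment parameter):
#include <stddef.h>
// Callers may assume the returned pointer is aligned to the value passed
// as the second parameter.
void *my_aligned_alloc(size_t size, size_t alignment)
    __attribute__((alloc_align(2)));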
Differential Revision: https://reviews.llvm.org/D29599
llvm-svn: 299117
Summary:
The -fxray-always-instrument= and -fxray-never-instrument= flags take
filenames that are used to apply the XRay instrumentation attributes
through a whitelist mechanism (similar to the sanitizer special case
lists). The implementation uses the same syntax and semantics as the
sanitizer blacklist files.
As implemented, we respect the attributes that are already defined in
the source file (i.e. those that have the
[[clang::xray_{always,never}_instrument]] attributes) before applying
the always/never instrument lists.
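A sketch of the intended usage (the file name and function pattern are
made up; entries follow the sanitizer special case list syntax that this
change reuses):
# xray-always.txt
fun:*hot_request_handler*
Then pass the file on the command line:
clang++ -fxray-instrument -fxray-always-instrument=xray-always.txt foo.cpp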
Reviewers: rsmith, chandlerc
Subscribers: jfb, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D30388
llvm-svn: 299041
FPContractModeKind is the codegen option flag, which is already ternary (off,
on, fast). This makes it universally the type used for fp-contraction info
across the front-end:
* In FPOptions (i.e. in the Sema + in the expression nodes).
* In LangOpts::DefaultFPContractMode which is the option that initializes
FPOptions in the Sema.
Another way to look at this change: previously, on/off were the only
fp-contractable states handled by the front-end:
* For "on", FMA folding was performed by the front-end
* For "fast", we simply forwarded the flag to TargetOptions to handle it in
LLVM
Now off/on/fast are all exposed because for fast we will generate
fast-math-flags during CodeGen.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
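As a sketch of the kind of expression affected (illustrative only):
float muladd(float a, float b, float c) {
  // -ffp-contract=off : never contracted
  // -ffp-contract=on  : the front end may fold this into an FMA
  // -ffp-contract=fast: contraction is left to LLVM via fast-math flags
  return a * b + c;
}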
---
This is a recommit of r299027 with an adjustment to the test
CodeGenCUDA/fp-contract.cu. The test assumed that even
though -ffp-contract=on is passed, FE-based folding of FMA won't happen.
This is obviously wrong since the user is asking for this explicitly with the
option. CUDA is different in that -ffp-contract=fast is on by default.
The test used to "work" because contract=fast and contract=on were maintained
separately and we didn't fold in the FE because contract=fast was on due to
the target default. This patch consolidates the contract=on/fast/off state
into a single ternary state, hence the change in behavior.
---
Differential Revision: https://reviews.llvm.org/D31167
llvm-svn: 299033
FPContractModeKind is the codegen option flag, which is already ternary (off,
on, fast). This makes it universally the type used for fp-contraction info
across the front-end:
* In FPOptions (i.e. in the Sema + in the expression nodes).
* In LangOpts::DefaultFPContractMode which is the option that initializes
FPOptions in the Sema.
Another way to look at this change: previously, on/off were the only
fp-contractable states handled by the front-end:
* For "on", FMA folding was performed by the front-end
* For "fast", we simply forwarded the flag to TargetOptions to handle it in
LLVM
Now off/on/fast are all exposed because for fast we will generate
fast-math-flags during CodeGen.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
Differential Revision: https://reviews.llvm.org/D31167
llvm-svn: 299027
Summary:
If promise_type has get_return_object_on_allocation_failure defined,
check if an allocation function returns nullptr, and if so,
return the result of get_return_object_on_allocation_failure().
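A rough sketch of the promise hook involved (the surrounding task type is
hypothetical, and the other promise members are elided):
struct task {
  struct promise_type {
    // Used to produce the coroutine's result when the allocation
    // function returns nullptr.
    static task get_return_object_on_allocation_failure();
    // ... remaining promise members elided ...
  };
};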
Reviewers: rsmith, EricWF
Reviewed By: EricWF
Subscribers: mehdi_amini, cfe-commits
Differential Revision: https://reviews.llvm.org/D31399
llvm-svn: 298891
Sema holds the current FPOptions which is adjusted by 'pragma STDC
FP_CONTRACT'. This then gets propagated into expression nodes as they are
built.
This encapsulates FPOptions so that this propagation happens opaquely rather
than directly with the fp_contractable on/off bit. This allows controlled
transitioning of fp_contractable to a ternary value (off, on, fast). It will
also allow adding more fast-math flags later.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
Differential Revision: https://reviews.llvm.org/D31166
llvm-svn: 298877
Details:
Emit suspend expression which roughly looks like:
auto && x = CommonExpr();
if (!x.await_ready()) {
  llvm_coro_save();
  x.await_suspend(...);  (*)
  llvm_coro_suspend();   (**)
}
x.await_resume();
where the result of the entire expression is the result of x.await_resume()
(*) If x.await_suspend's return type is bool, it can veto the suspend:
  if (x.await_suspend(...))
    llvm_coro_suspend();
(**) llvm_coro_suspend() encodes three possible continuations as a switch instruction:
%where-to = call i8 @llvm.coro.suspend(...)
switch i8 %where-to, label %coro.ret [ ; jump to epilogue to suspend
  i8 0, label %yield.ready   ; go here when resumed
  i8 1, label %yield.cleanup ; go here when destroyed
]
llvm-svn: 298784
attributes.
These patches don't work because we can't currently access the parameter
information in a reliable way when building attributes. I thought this
would be relatively straightforward to fix, but it seems not to be the
case. Fixing this will require a substantial re-plumbing of machinery to
allow attributes to be handled in this location, and several other fixes
to the attribute machinery should probably be made at the same time. All
of this will make the patch substantially more complicated.
Reverting for now as there are active miscompiles caused by the current
version.
llvm-svn: 298695
Summary:
Clang companion patch to LLVM patch D31027, which adds support
for emitting minimized bitcode file for use in the thin link step.
Add a cc1 option -fthin-link-bitcode=<file> to trigger this behavior.
Depends on D31027.
Reviewers: mehdi_amini, pcc
Subscribers: cfe-commits, Prazek
Differential Revision: https://reviews.llvm.org/D31050
llvm-svn: 298639
After r297760, __isOSVersionAtLeast in compiler-rt loads the CoreFoundation
symbols at runtime. This means that `@available` will always fail when used in a
binary without a linked CoreFoundation.
This commit forces Clang to emit a reference to a CoreFoundation symbol when
`@available` is used to ensure that linking will fail when CoreFoundation isn't
linked with the build product.
rdar://31039592
Differential Revision: https://reviews.llvm.org/D30977
llvm-svn: 298588
It seems MS headers have started using __readgsqword, and since it's
used in a header that doesn't include intrin.h, we can't implement it as
an inline function anymore.
That was already the case for __readfsdword, which Saleem added support
for in r220859. This patch reuses that codegen to implement all of
__read[fg]s{byte,word,dword,qword}.
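For example, as a builtin it no longer needs a declaration from intrin.h
(the offset is illustrative only):
unsigned __int64 read_gs_slot(void) {
  return __readgsqword(0x30); // load a qword from gs:[0x30]
}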
Differential Revision: https://reviews.llvm.org/D31248
llvm-svn: 298538
explaining why we have to ignore errors here even though in other parts
of codegen we can be more strict with builtins.
Also add a test case based on the code in a TSan test that found this
issue.
llvm-svn: 298494
declarations and calls instead of just definitions, and then teach it to
*not* attach such attributes even if the source code contains them.
This follows the design direction discussed on cfe-dev here:
http://lists.llvm.org/pipermail/cfe-dev/2017-January/052066.html
The idea is that for C standard library builtins, even if the library
vendor chooses to annotate their routines with __attribute__((nonnull)),
we will ignore those attributes which pertain to pointer arguments that
have an associated size. This allows the widespread (and seemingly
reasonable) pattern of calling these routines with a null pointer and
a zero size. I have only done this for the library builtins currently
recognized by Clang, but we can now trivially add to this set. This will
be controllable with -fno-builtin if anyone should care to do so.
Note that this does *not* change the AST. As a consequence, warnings,
static analysis, and source code rewriting are not impacted.
This isn't even a regression on any platform as neither Clang nor LLVM
have ever put 'nonnull' onto these arguments for declarations. All this
patch does is enable it on other declarations while preventing us from
ever accidentally enabling it on these libc functions due to a library
vendor.
It will also allow any other libraries using this annotation to gain
optimizations based on the annotation even when only a declaration is
visible.
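As an example of the pattern being protected (the vendor declaration
below is hypothetical):
#include <stddef.h>
// A libc vendor might annotate memcpy like this.
void *memcpy(void *dst, const void *src, size_t n)
    __attribute__((nonnull(1, 2)));
void zero_size_copy(void) {
  // Widespread and seemingly reasonable: null pointers with a zero size.
  // Clang will not attach 'nonnull' to these size-carrying pointer
  // arguments, so the optimizer cannot treat this call as UB.
  memcpy(NULL, NULL, 0);
}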
llvm-svn: 298491
Summary:
Because SamplePGO passes will be invoked twice in a ThinLTO build (once in the
compile phase, once in the backend), we want to make sure the IR at the second
phase matches the hot part in the profile; thus we do not want to inline hot
callsites in the first phase.
Reviewers: tejohnson, eraman
Reviewed By: tejohnson
Subscribers: mehdi_amini, cfe-commits, Prazek
Differential Revision: https://reviews.llvm.org/D31202
llvm-svn: 298429
Setting dllexport on a declaration has no effect, as we do not emit export
directives for declarations.
Part of the fix for PR32334.
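For example (illustrative):
__declspec(dllexport) void f(void);   // declaration only: no export directive is emitted
__declspec(dllexport) void g(void) {} // definition: still exported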
Differential Revision: https://reviews.llvm.org/D31162
llvm-svn: 298330
In doing so, clean up the MD5 interface a little. Most
existing users only care about the lower 8 bytes of an MD5,
but for some users that care about the upper and lower,
there wasn't a good interface. Furthermore, consumers
of the MD5 checksum were required to handle endianness
details on their own, so it seems reasonable to abstract
this into a nicer interface that just gives you the right
value.
Differential Revision: https://reviews.llvm.org/D31105
llvm-svn: 298322
This is a follow-up to r297700 (Add a nullability sanitizer).
It addresses some FIXME's re: using nullability-specific diagnostic
handlers from compiler-rt, now that the necessary handlers exist.
check-ubsan test updates to follow.
llvm-svn: 297750
Teach UBSan to detect when a value with the _Nonnull type annotation
assumes a null value. Call expressions, initializers, assignments, and
return statements are all checked.
Because _Nonnull does not affect IRGen, the new checks are disabled by
default. The new driver flags are:
-fsanitize=nullability-arg (_Nonnull violation in call)
-fsanitize=nullability-assign (_Nonnull violation in assignment)
-fsanitize=nullability-return (_Nonnull violation in return stmt)
-fsanitize=nullability (all of the above)
This patch builds on top of UBSan's existing support for detecting
violations of the nonnull attributes ('nonnull' and 'returns_nonnull'),
and relies on the compiler-rt support for those checks. Eventually we
will need to update the diagnostic messages in compiler-rt (there are
FIXME's for this, which will be addressed in a follow-up).
One point of note is that the nullability-return check is only allowed
to kick in if all arguments to the function satisfy their nullability
preconditions. This makes it necessary to emit some null checks in the
function body itself.
Testing: check-clang and check-ubsan. I also built some Apple ObjC
frameworks with an asserts-enabled compiler, and verified that we get
valid reports.
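A small illustrative example of the violations these checks catch at
runtime:
void use(int *_Nonnull p);
void caller(int *maybe_null) {
  int *_Nonnull q = maybe_null; // nullability-assign fires if maybe_null is null
  use(q);                       // nullability-arg fires if the argument is null
}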
Differential Revision: https://reviews.llvm.org/D30762
llvm-svn: 297700
Change ASTFileSignature from a random 32-bit number to the hash of the
PCM content.
- Move definition ASTFileSignature to Basic/Module.h so Module and
ASTSourceDescriptor can use it.
- Change the signature from uint64_t to std::array<uint32_t,5>.
- Stop using (saving/reading) the size and modification time of PCM
files when there is a valid SIGNATURE.
- Add UNHASHED_CONTROL_BLOCK, and use it to store the SIGNATURE record
and other records that shouldn't affect the hash. Because implicit
modules reuse the same file for multiple levels of -Werror, this
includes DIAGNOSTIC_OPTIONS and DIAG_PRAGMA_MAPPINGS.
This helps to solve a PCH + implicit Modules dependency issue: PCH files
are handled by the external build system, whereas implicit modules are
handled by the compiler's internal build system. This prevents invalidating a
PCH when the compiler overwrites a PCM file with the same content
(modulo the diagnostic differences).
Design and original patch by Manman Ren!
llvm-svn: 297655
x86 has undef SSE/AVX intrinsics that should represent a bogus register operand.
This is not the same as LLVM's undef value which can take on multiple bit patterns.
There are better solutions / follow-ups to this discussed here:
https://bugs.llvm.org/show_bug.cgi?id=32176
...but this should prevent miscompiles with a one-line code change.
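For reference, one of the affected intrinsics in use (illustrative):
#include <immintrin.h>
__m128 splat_lane0(float x) {
  __m128 v = _mm_undefined_ps();        // a bogus register operand, not LLVM undef
  return _mm_move_ss(v, _mm_set_ss(x)); // only lane 0 is meaningful afterwards
}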
Differential Revision: https://reviews.llvm.org/D30834
llvm-svn: 297588
It's possible to load out-of-range values from bitfields backed by a
boolean or an enum. Check for UB loads from bitfields.
This is the motivating example:
struct S {
  BOOL b : 1; // Signed ObjC BOOL.
};
S s;
s.b = 1;      // This is actually stored as -1.
if (s.b == 1) // Evaluates to false, -1 != 1.
  ...
Changes since the original commit:
- Single-bit bools are a special case (see CGF::EmitFromMemory), and we
can't avoid dealing with them when loading from a bitfield. Don't try to
insert a check in this case.
Differential Revision: https://reviews.llvm.org/D30423
llvm-svn: 297389
It's possible to load out-of-range values from bitfields backed by a
boolean or an enum. Check for UB loads from bitfields.
This is the motivating example:
struct S {
  BOOL b : 1; // Signed ObjC BOOL.
};
S s;
s.b = 1;      // This is actually stored as -1.
if (s.b == 1) // Evaluates to false, -1 != 1.
  ...
Differential Revision: https://reviews.llvm.org/D30423
llvm-svn: 297298
This patch honors the unaligned type qualifier (currently available through the
keyword __unaligned and -fms-extensions) in CodeGen. In its current form the
patch affects declarations and expressions. It does not affect fields of
classes.
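A sketch of the effect, as I understand it (compile with -fms-extensions):
int load_value(__unaligned int *p) {
  return *p; // the load no longer assumes the natural alignment of int
}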
Differential Revision: https://reviews.llvm.org/D30166
llvm-svn: 297276
Summary:
Because of the existence of branches out of GNU statement expressions, it
is possible that emitting cleanups for a full expression may cause the
new insertion point to not be dominated by the result of the inner
expression. Consider this example:
struct Foo { Foo(); ~Foo(); int x; };
int g(Foo, int);
int f(bool cond) {
  int n = g(Foo(), ({ if (cond) return 0; 42; }));
  return n;
}
Before this change, result of the call to 'g' did not dominate its use
in the store to 'n'. The early return exit from the statement expression
branches to a shared cleanup block, which ends in a switch between the
fallthrough destination (the assignment to 'n') or the function exit
block.
This change solves the problem by spilling and reloading expression
evaluation results when any of the active cleanups have branches.
I audited the other call sites of enterFullExpression, and they don't
appear to keep any Values live across the site of the cleanup, except in
ARC code. I wasn't able to create a test case for ARC that exhibits this
problem, though.
Reviewers: rjmccall, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D30590
llvm-svn: 297084
Summary:
Added co_return statement emission.
Tweaked coro-alloc.cpp test to use co_return to trigger coroutine processing instead of co_await, since this change starts emitting the body of the coroutine and await expression handling has not been upstreamed yet.
Reviewers: rsmith, majnemer, EricWF, aaron.ballman
Reviewed By: rsmith
Subscribers: majnemer, llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D29979
llvm-svn: 297076
block copy/destroy routines
This is a preparation commit for work on merging unique block copy/destroy
helper functions.
rdar://22950898
Differential Revision: https://reviews.llvm.org/D30345
llvm-svn: 297023
Summary:
Functions with the "xray_log_args" attribute will tell LLVM to emit a special
XRay sled for compiler-rt to copy any call arguments to your logging handler.
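For example (the function is hypothetical; the attribute argument is the
number of leading arguments to copy):
__attribute__((xray_log_args(1)))
void handle_request(int request_id) {
  // With XRay instrumentation enabled, the extra sled lets the runtime
  // hand request_id to the installed logging handler.
}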
Reviewers: dberris
Reviewed By: dberris
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D29704
llvm-svn: 296999
UBSan's nonnull argument check applies when a parameter has the
"nonnull" attribute. The check currently works for FunctionDecls, but
not for ObjCMethodDecls. This patch extends the check to work for ObjC.
Differential Revision: https://reviews.llvm.org/D30599
llvm-svn: 296996
easily extend the aggregate-builder API. Stupid missing language
features.
Also add APIs for constructing a relative reference and computing
the offset of a position from the start of the initializer.
llvm-svn: 296979
When clang emits an inheriting C++ constructor it may inline code
during the CodeGen phase. This patch ensures that any debug info in
this inlined code gets a proper inlined location. Otherwise we can end
up with invalid debug info metadata, since all inlined local variables
and function arguments would be reparented into the call site.
Analogous to ApplyInlineLocation, this patch introduces an
ApplyInlineDebugLocation scoped helper to facilitate entering an
inlined scope and cleaning up afterwards.
This fixes one of the issues discovered in PR32042.
rdar://problem/30679307
llvm-svn: 296388
Essentially, as a base class constructor does not construct virtual bases, such
a constructor for an abstract class does not need the corresponding base class
construction to be valid, and likewise for destructors.
This creates an awkward situation: clang will sometimes generate references to
the complete object and deleting destructors for an abstract class (it puts
them in the construction vtable for a derived class). But we can't generate a
"correct" version of these because we can't generate references to base class
constructors any more (if they're template specializations, say, we might not
have instantiated them and can't assume any other TU will emit a copy).
Fortunately, we don't need to, since no correct program can ever invoke them,
so instead emit symbols that just trap.
We should stop emitting references to these symbols, but still need to emit
definitions for compatibility.
llvm-svn: 296275
2nd attempt: the first was in r296231, but it had a use after lifetime
bug.
Clang has logic to lower certain conditional expressions directly into llvm
select instructions. However, it does not emit the correct profile counter
increment as it does this: it emits an unconditional increment of the counter
for the 'then branch', even if the value selected is from the 'else branch'
(this is PR32019).
That means, given the following snippet, we would report that "0" is selected
twice, and that "1" is never selected:
int f1(int x) {
  return x ? 0 : 1;
             ^2  ^0
}
f1(0);
f1(1);
Fix the problem by using the instrprof_increment_step intrinsic to do the
proper increment.
llvm-svn: 296245
Clang has logic to lower certain conditional expressions directly into
llvm select instructions. However, it does not emit the correct profile
counter increment as it does this: it emits an unconditional increment
of the counter for the 'then branch', even if the value selected is from
the 'else branch' (this is PR32019).
That means, given the following snippet, we would report that "0" is
selected twice, and that "1" is never selected:
int f1(int x) {
  return x ? 0 : 1;
             ^2  ^0
}
f1(0);
f1(1);
Fix the problem by using the instrprof_increment_step intrinsic to do
the proper increment.
llvm-svn: 296231
Teach ubsan to diagnose remainder operations which have undefined
behavior due to signed overflow (e.g. INT_MIN % -1).
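For example:
#include <limits.h>
int remainder_of(int a, int b) {
  return a % b; // remainder_of(INT_MIN, -1) overflows; ubsan now reports it
}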
Differential Revision: https://reviews.llvm.org/D29437
llvm-svn: 296214
C requires the operands of arithmetic expressions to be promoted if
their types are smaller than an int. Ubsan emits overflow checks when
this sort of type promotion occurs, even if there is no way to actually
get an overflow with the promoted type.
This patch teaches clang how to omit the superfluous overflow checks
(addressing PR20193).
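A sketch of a check that can now be omitted:
int add_bytes(unsigned char a, unsigned char b) {
  // a and b are promoted to int; the sum is at most 510 and can never
  // overflow int, so no runtime overflow check is emitted.
  return a + b;
}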
Testing: check-clang and check-ubsan.
Differential Revision: https://reviews.llvm.org/D29369
llvm-svn: 296213
The goal of this is to fix a bug in modules where we'd merge
FunctionDecls that differed in their pass_object_size attributes. Since
we can overload on the presence of pass_object_size attributes, this
behavior is incorrect.
We don't represent `N` in `pass_object_size(N)` as part of
ExtParameterInfo, since it's an error to overload solely on the value of
N. This means that we have a bug if we have two modules that declare
functions that differ only in their pass_object_size attrs, like so:
// In module A, from a.h
void foo(char *__attribute__((pass_object_size(0))));
// In module B, from b.h
void foo(char *__attribute__((pass_object_size(1))));
// In module C, in main.c
#include "a.h"
#include "b.h"
At the moment, we'll merge the foo decls, when we should instead emit a
diagnostic about an invalid overload. We seem to have similar (silent)
behavior if we overload only on the return type of `foo` instead; I'll
try to find a good place to put a FIXME (or I'll just file a bug) soon.
This patch also fixes a bug where we'd not output the proper extended
parameter info for declarations with pass_object_size attrs.
llvm-svn: 296076
Fix the fact that we don't assign profile counters to constructors in
classes with virtual bases, or constructors with variadic parameters.
Differential Revision: https://reviews.llvm.org/D30131
llvm-svn: 296062
This patch makes use of the prefix/suffix ABI argument distinction that
was introduced in r295870, so that we now emit ExtParameterInfo at the
correct offset for member calls that have added ABI arguments. I don't
see a good way to test the generated param info, since we don't actually
seem to use it in CGFunctionInfo outside of Swift. Any
suggestions/thoughts for how to better test this are welcome. :)
This patch also fixes a small bug with inheriting constructors: if we
decide not to pass args into a base class ctor, we would still
generate ExtParameterInfo as though we did. The added test-case is for
that behavior.
llvm-svn: 296024
This fixes an assertion failure in cases where we had expression
statements that declared variables nested inside of pass_object_size
args. Since we were emitting the same ExprStmt twice (once for the arg,
once for the @llvm.objectsize call), we were getting issues with
redefining locals.
This also means that we can be more lax about when we emit
@llvm.objectsize for pass_object_size args: since we're reusing the
arg's value itself, we don't have to care so much about side-effects.
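A sketch of the kind of code involved (names are made up):
void sink(char *buf __attribute__((pass_object_size(0))));
char storage[16];
void caller(void) {
  // A GNU statement expression declaring a local inside a pass_object_size
  // argument; previously the expression was emitted twice (once for the
  // argument, once for the implicit @llvm.objectsize call), redefining the
  // local.
  sink(({ char *p = storage; p; }));
}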
llvm-svn: 295935
Summary:
We implement structured exception handling (SEH) by generating filter functions for functions that use exceptions. Currently, we use associative comdats to ensure that the filter functions are preserved if and only if the functions we generated them for are preserved.
This can lead to problems when generating COFF objects - LLVM may decide to inline a function that uses SEH and remove its body, at which point we will end up with a comdat that COFF cannot represent.
To avoid running into that situation, this change makes us not use associative comdats for SEH filter functions. We can still get the benefits we used the associative comdats for: we will always preserve filter functions we use, and dead stripping can eliminate the ones we don't use.
Reviewers: rnk, pcc, ruiu
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D30117
llvm-svn: 295872
Meta: The ultimate goal is to teach ExtParameterInfo about
pass_object_size attributes. This is necessary for that, since our
ExtParameterInfo is a bit buggy in C++. I plan to actually make use of
this Prefix/Suffix info in the near future, but I like small
single-purpose changes. Especially when those changes are hard to
actually test...
At the moment, some of our C++-specific CodeGen pretends that ABIs can
only add arguments to the beginning of a function call. This isn't quite
correct: args can be appended to the end, as well. It hasn't mattered
much until now, since we seem to only use this "number of arguments
added" data when calculating the ExtParameterInfo to use when making a
CGFunctionInfo. Said ExtParameterInfo is currently only used for
ParameterABIs (Swift) and ns_consumed (ObjC).
So, this patch allows ABIs to indicate whether args they added were at
the beginning or end of an argument list. We can use this information to
emit ExtParameterInfos more correctly, though like said, that bit is
coming soon.
No tests since this is theoretically a nop.
llvm-svn: 295870
The following code would crash clang:
void foo(unsigned *const __attribute__((pass_object_size(0))));
void bar(unsigned *i) { foo(i); }
This is because we were always selecting the version of
`@llvm.objectsize` that takes an i8* in CodeGen. Passing an i32* as an
i8* makes LLVM very unhappy.
(Yes, I'm surprised that this remained uncaught for so long, too. :) )
As an added bonus, we'll now also use the appropriate address space when
emitting @llvm.objectsize calls.
llvm-svn: 295805
declaration declared using class template argument deduction.
Patch by Eric Fiselier (who is busy and asked me to commit this on his behalf)!
Differential Revision: https://reviews.llvm.org/D30082
llvm-svn: 295794
Summary: The AddDiscriminator pass is only useful for sample PGO. This patch restricts AddDiscriminator to -fdebug-info-for-profiling so that it does not introduce unnecessary debug size increases for non-sample-pgo builds.
Reviewers: dblaikie, aprantl
Reviewed By: dblaikie
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D30220
llvm-svn: 295764
This patch teaches ubsan to insert exactly one null check for the 'this'
pointer per method/lambda.
Previously, given a load of a member variable from an instance method
('this->x'), ubsan would insert a null check for 'this', and another
null check for '&this->x', before allowing the load to occur.
Similarly, given a call to a method from another method bound to the
same instance ('this->foo()'), ubsan would insert a redundant null check for
'this'. There is also a redundant null check in the case where the
object pointer is a reference ('Ref.foo()').
This patch teaches ubsan to remove the redundant null checks identified
above.
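For example:
struct A {
  int x;
  int get() { return this->x; } // one null check for 'this', none for '&this->x'
  int use() { return get(); }   // the member call no longer re-checks 'this'
};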
Testing: check-clang, check-ubsan, and a stage2 ubsan build.
I also compiled X86FastISel.cpp with -fsanitize=null using
patched/unpatched clangs based on r293572. Here are the number of null
checks emitted:
-------------------------------------
| Setup          | # of null checks |
-------------------------------------
| unpatched, -O0 | 21767            |
| patched, -O0   | 10758            |
-------------------------------------
Changes since the initial commit:
- Don't introduce any unintentional object-size or alignment checks.
- Don't rely on IRGen of C labels in the test.
Differential Revision: https://reviews.llvm.org/D29530
llvm-svn: 295515
CodeGenFunction::EmitTypeCheck accepts a bool flag which controls
whether or not null checks are emitted. Make this a bit more flexible by
changing the bool to a SanitizerSet.
Needed for an upcoming change which deals with a scenario in which we
only want to emit null checks.
llvm-svn: 295514
This reverts commit r295401. It breaks the ubsan self-host. It inserts
object size checks once per C++ method which fire when the structure is
empty.
llvm-svn: 295494
With tasks, the cancel may happen in another task. This has a different
region info which means that we can't find it here.
Differential Revision: https://reviews.llvm.org/D30091
llvm-svn: 295474
This resolves a deadlock with the cancel directive when there is no explicit
cancellation point. In that case, the implicit barrier acts as cancellation
point. After removing the barrier after cancel, the now unmatched barrier for
the explicit cancellation point has to go as well.
This probably worked before rL255992: with the calls for the explicit
barrier, it was ensured that all threads passed a barrier before exiting.
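A sketch of the scenario (illustrative):
#include <omp.h>
void work(void) {
#pragma omp parallel
  {
    if (omp_get_thread_num() == 0) {
#pragma omp cancel parallel
    }
    // No explicit '#pragma omp cancellation point parallel' here: the
    // implicit barrier at the end of the region is the only cancellation
    // point, so the barrier bookkeeping must stay consistent to avoid a
    // deadlock.
  }
}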
Reported by Simon Convent and Joachim Protze!
Differential Revision: https://reviews.llvm.org/D30088
llvm-svn: 295473
This patch teaches ubsan to insert exactly one null check for the 'this'
pointer per method/lambda.
Previously, given a load of a member variable from an instance method
('this->x'), ubsan would insert a null check for 'this', and another
null check for '&this->x', before allowing the load to occur.
Similarly, given a call to a method from another method bound to the
same instance ('this->foo()'), ubsan would insert a redundant null check for
'this'. There is also a redundant null check in the case where the
object pointer is a reference ('Ref.foo()').
This patch teaches ubsan to remove the redundant null checks identified
above.
Testing: check-clang and check-ubsan. I also compiled X86FastISel.cpp
with -fsanitize=null using patched/unpatched clangs based on r293572.
Here are the number of null checks emitted:
-------------------------------------
| Setup          | # of null checks |
-------------------------------------
| unpatched, -O0 | 21767            |
| patched, -O0   | 10758            |
-------------------------------------
Changes since the initial commit: don't rely on IRGen of C labels in the
test.
Differential Revision: https://reviews.llvm.org/D29530
llvm-svn: 295401
This patch teaches ubsan to insert exactly one null check for the 'this'
pointer per method/lambda.
Previously, given a load of a member variable from an instance method
('this->x'), ubsan would insert a null check for 'this', and another
null check for '&this->x', before allowing the load to occur.
Similarly, given a call to a method from another method bound to the
same instance ('this->foo()'), ubsan would insert a redundant null check for
'this'. There is also a redundant null check in the case where the
object pointer is a reference ('Ref.foo()').
This patch teaches ubsan to remove the redundant null checks identified
above.
Testing: check-clang and check-ubsan. I also compiled X86FastISel.cpp
with -fsanitize=null using patched/unpatched clangs based on r293572.
Here are the number of null checks emitted:
-------------------------------------
| Setup          | # of null checks |
-------------------------------------
| unpatched, -O0 | 21767            |
| patched, -O0   | 10758            |
-------------------------------------
Differential Revision: https://reviews.llvm.org/D29530
llvm-svn: 295391
This patch implements codegen for the reduction clause on
any teams construct for elementary data types. It builds
on parallel reductions on the GPU. Subsequently,
the team master writes to a unique location in a global
memory scratchpad. The last team to do so loads and
reduces this array to calculate the final result.
This patch emits two helper functions that are used by
the OpenMP runtime on the GPU to perform reductions across
teams.
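For example, the kind of construct this enables on the NVPTX device
(illustrative):
int a[1024];
int sum_teams(void) {
  int sum = 0;
#pragma omp target teams distribute parallel for reduction(+ : sum) map(tofrom : sum)
  for (int i = 0; i < 1024; ++i)
    sum += a[i];
  return sum;
}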
Patch by Tian Jin in collaboration with Arpith Jacob
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29879
llvm-svn: 295335
This patch implements codegen for the reduction clause on
any parallel construct for elementary data types. An efficient
implementation requires hierarchical reduction within a
warp and a threadblock. It is complicated by the fact that
variables declared in the stack of a CUDA thread cannot be
shared with other threads.
The patch creates a struct to hold reduction variables and
a number of helper functions. The OpenMP runtime on the GPU
implements reduction algorithms that use these helper
functions to perform reductions within a team. Variables are
shared between CUDA threads using shuffle intrinsics.
An implementation of reductions on the NVPTX device is
substantially different to that of CPUs. However, this patch
is written so that there are minimal changes to the rest of
OpenMP codegen.
The implemented design allows the compiler and runtime to be
decoupled, i.e., the runtime does not need to know of the
reduction operation(s), the type of the reduction variable(s),
or the number of reductions. The design also allows reuse of
host codegen, with appropriate specialization for the NVPTX
device.
While the patch does introduce a number of abstractions, the
expected use case calls for inlining of the GPU OpenMP runtime.
After inlining and optimizations in LLVM, these abstractions
are unwound and performance of OpenMP reductions is comparable
to CUDA-canonical code.
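For example, the kind of parallel reduction this targets (illustrative):
int a[1024];
int sum_parallel(void) {
  int sum = 0;
#pragma omp target parallel for reduction(+ : sum) map(tofrom : sum)
  for (int i = 0; i < 1024; ++i)
    sum += a[i];
  return sum;
}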
Patch by Tian Jin in collaboration with Arpith Jacob
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29758
llvm-svn: 295333
This patch implements codegen for the reduction clause on
any parallel construct for elementary data types. An efficient
implementation requires hierarchical reduction within a
warp and a threadblock. It is complicated by the fact that
variables declared in the stack of a CUDA thread cannot be
shared with other threads.
The patch creates a struct to hold reduction variables and
a number of helper functions. The OpenMP runtime on the GPU
implements reduction algorithms that use these helper
functions to perform reductions within a team. Variables are
shared between CUDA threads using shuffle intrinsics.
An implementation of reductions on the NVPTX device is
substantially different to that of CPUs. However, this patch
is written so that there are minimal changes to the rest of
OpenMP codegen.
The implemented design allows the compiler and runtime to be
decoupled, i.e., the runtime does not need to know of the
reduction operation(s), the type of the reduction variable(s),
or the number of reductions. The design also allows reuse of
host codegen, with appropriate specialization for the NVPTX
device.
While the patch does introduce a number of abstractions, the
expected use case calls for inlining of the GPU OpenMP runtime.
After inlining and optimizations in LLVM, these abstractions
are unwound and performance of OpenMP reductions is comparable
to CUDA-canonical code.
Patch by Tian Jin in collaboration with Arpith Jacob
Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29758
llvm-svn: 295319
Removed ndrange_t as a Clang builtin type and added it
as a struct type in the OpenCL header.
Use the type name to do the Sema checking in enqueue_kernel
and modify IR generation accordingly.
Review: D28058
Patch by Dmitry Borisenkov!
llvm-svn: 295311
Destructor references are not modelled explicitly in the AST. This adds
checks for destructor calls due to variable definitions and temporaries.
If a dllimport function references a non-dllimport destructor, it must
not be emitted available_externally, as the referenced destructor might
live across the DLL boundary and isn't exported.
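A sketch of the situation (illustrative):
struct Local { ~Local(); };       // destructor is not dllimport
struct __declspec(dllimport) Imported {
  void run() { Local l; }         // references Local::~Local()
};
// run() must not be emitted available_externally, because ~Local() may
// live on the other side of the DLL boundary and is not exported.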
llvm-svn: 295258
The function is used to check whether a type is a class with
non-dllimport destructor. It needs to look through typedefs and array
types.
llvm-svn: 295257
block or lambda.
This is a follow-up to r281682, which fixed a bug in computeBlockInfo
where the captured VarDecl's type, rather than the captured field type
of the enclosing lambda or block, was used to compute the layout of a
block.
This commit makes similar changes to enterBlockScope. This is necessary
to correctly determine whether a block capture requires cleanup.
rdar://problem/30388124
llvm-svn: 295034
This bypasses integer sanitization checks that are redundant on the expression,
since it has already been checked by Sema. Fixes a clang codegen assertion on
"void test() { new int[0+1]{0}; }" when building with
-fsanitize=signed-integer-overflow.
llvm-svn: 295006
Use # as the comment leader for AArch64 auto-release elision marker.
This is to keep it in sync with the value used in swift. When building
libdispatch for Linux AArch64, the auto-release elision marker was
emitted. However, ELF uses # as the comment leader while MachO accepts
both ; and #. Use the common marker for it instead.
llvm-svn: 294877