With this, FMF(contract) becomes an alternative way to express the request to
contract.
These are currently only propagated for FMul, FAdd and FSub. The rest will be
added as more FMFs are hooked up for this.
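As a hedged illustration (not from the commit; the function name is made up):
when contraction is requested, a source expression like

  float mad(float a, float b, float c) {
    return a * b + c;   /* fmul and fadd may now carry the 'contract' flag */
  }

can be emitted as an fmul and an fadd carrying the contract fast-math flag,
leaving the decision of whether to form an fma to LLVM.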
This is toward fixing PR25721.
Differential Revision: https://reviews.llvm.org/D31168
llvm-svn: 299469
FPContractModeKind is the codegen option flag which is already ternary (off,
on, fast). This makes it universally the type for the contractable info
across the front-end:
* In FPOptions (i.e. in the Sema + in the expression nodes).
* In LangOpts::DefaultFPContractMode which is the option that initializes
FPOptions in the Sema.
Another way to look at this change is that previously fp-contractable on/off
were the only states handled by the front-end:
* For "on", FMA folding was performed by the front-end
* For "fast", we simply forwarded the flag to TargetOptions to handle it in
LLVM
Now off/on/fast are all exposed because for fast we will generate
fast-math-flags during CodeGen.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
---
This is a recommit of r299027 with an adjustment to the test
CodeGenCUDA/fp-contract.cu. The test assumed that even
though -ffp-contract=on is passed, FE-based folding of FMA won't happen.
This is obviously wrong since the user is asking for this explicitly with the
option. CUDA is different in that -ffp-contract=fast is on by default.
The test used to "work" because contract=fast and contract=on were maintained
separately and we didn't fold in the FE because contract=fast was on due to
the target-default. This patch consolidates the contract=on/fast/off state
into a ternary state hence the change in behavior.
---
Differential Revision: https://reviews.llvm.org/D31167
llvm-svn: 299033
FPContractModeKind is the codegen option flag which is already ternary (off,
on, fast). This makes it universally the type for the contractable info
across the front-end:
* In FPOptions (i.e. in the Sema + in the expression nodes).
* In LangOpts::DefaultFPContractMode which is the option that initializes
FPOptions in the Sema.
Another way to look at this change is that previously fp-contractable on/off
were the only states handled by the front-end:
* For "on", FMA folding was performed by the front-end
* For "fast", we simply forwarded the flag to TargetOptions to handle it in
LLVM
Now off/on/fast are all exposed because for fast we will generate
fast-math-flags during CodeGen.
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
Differential Revision: https://reviews.llvm.org/D31167
llvm-svn: 299027
Sema holds the current FPOptions, which is adjusted by '#pragma STDC
FP_CONTRACT'. This then gets propagated into expression nodes as they are
built.
This encapsulates FPOptions so that this propagation happens opaquely rather
than directly with the fp_contractable on/off bit. This allows controlled
transitioning of fp_contractable to a ternary value (off, on, fast). It will
also allow adding more fast-math flags later.
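As an illustrative sketch (not part of the patch), the pragma can flip the
contractable state within a file, and each expression is built with whatever
FPOptions Sema holds at that point:

  #pragma STDC FP_CONTRACT ON
  double fused(double a, double b, double c)   { return a * b + c; }

  #pragma STDC FP_CONTRACT OFF
  double unfused(double a, double b, double c) { return a * b + c; }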
This is toward moving fp-contraction=fast from an LLVM TargetOption to a
FastMathFlag in order to fix PR25721.
Differential Revision: https://reviews.llvm.org/D31166
llvm-svn: 298877
Details:
Emit suspend expression which roughly looks like:
  auto && x = CommonExpr();
  if (!x.await_ready()) {
    llvm_coro_save();
    x.await_suspend(...);        (*)
    llvm_coro_suspend();         (**)
  }
  x.await_resume();
where the result of the entire expression is the result of x.await_resume().
(*) If x.await_suspend's return type is bool, it allows vetoing a suspend:
  if (x.await_suspend(...))
    llvm_coro_suspend();
(**) llvm_coro_suspend() encodes three possible continuations as a switch instruction:
  %where-to = call i8 @llvm.coro.suspend(...)
  switch i8 %where-to, label %coro.ret [ ; jump to epilogue to suspend
    i8 0, label %yield.ready   ; go here when resumed
    i8 1, label %yield.cleanup ; go here when destroyed
  ]
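As a rough source-level illustration (a hedged sketch in C++20 syntax; the
commit itself targets the coroutines TS, and all names here are made up), an
awaitable whose await_suspend returns bool exercises the veto path marked (*):

  #include <coroutine>

  struct task {
    struct promise_type {
      task get_return_object() { return {}; }
      std::suspend_never initial_suspend() noexcept { return {}; }
      std::suspend_never final_suspend() noexcept { return {}; }
      void return_void() {}
      void unhandled_exception() {}
    };
  };

  struct veto_suspend {
    bool await_ready() const noexcept { return false; }
    // Returning false vetoes the suspend; the coroutine resumes immediately.
    bool await_suspend(std::coroutine_handle<>) const noexcept { return false; }
    void await_resume() const noexcept {}
  };

  task demo() { co_await veto_suspend{}; }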
llvm-svn: 298784
Teach UBSan to detect when a value with the _Nonnull type annotation
assumes a null value. Call expressions, initializers, assignments, and
return statements are all checked.
Because _Nonnull does not affect IRGen, the new checks are disabled by
default. The new driver flags are:
-fsanitize=nullability-arg (_Nonnull violation in call)
-fsanitize=nullability-assign (_Nonnull violation in assignment)
-fsanitize=nullability-return (_Nonnull violation in return stmt)
-fsanitize=nullability (all of the above)
This patch builds on top of UBSan's existing support for detecting
violations of the nonnull attributes ('nonnull' and 'returns_nonnull'),
and relies on the compiler-rt support for those checks. Eventually we
will need to update the diagnostic messages in compiler-rt (there are
FIXME's for this, which will be addressed in a follow-up).
One point of note is that the nullability-return check is only allowed
to kick in if all arguments to the function satisfy their nullability
preconditions. This makes it necessary to emit some null checks in the
function body itself.
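A hedged sketch (assumed example, not from the patch) of code that the new
checks would flag at runtime when built with -fsanitize=nullability:

  int *_Nonnull sink;

  int *_Nonnull forward(int *_Nonnull p) { return p; }

  void demo(void) {
    int *maybe_null = 0;
    forward(maybe_null);   /* nullability-arg: null passed for a _Nonnull parameter */
    sink = maybe_null;     /* nullability-assign: null stored to a _Nonnull lvalue */
  }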
Testing: check-clang and check-ubsan. I also built some Apple ObjC
frameworks with an asserts-enabled compiler, and verified that we get
valid reports.
Differential Revision: https://reviews.llvm.org/D30762
llvm-svn: 297700
Summary:
Because of the existence of branches out of GNU statement expressions, it
is possible that emitting cleanups for a full expression may cause the
new insertion point to not be dominated by the result of the inner
expression. Consider this example:
  struct Foo { Foo(); ~Foo(); int x; };
  int g(Foo, int);
  int f(bool cond) {
    int n = g(Foo(), ({ if (cond) return 0; 42; }));
    return n;
  }
Before this change, result of the call to 'g' did not dominate its use
in the store to 'n'. The early return exit from the statement expression
branches to a shared cleanup block, which ends in a switch between the
fallthrough destination (the assignment to 'n') or the function exit
block.
This change solves the problem by spilling and reloading expression
evaluation results when any of the active cleanups have branches.
I audited the other call sites of enterFullExpression, and they don't
appear to keep any Values live across the site of the cleanup, except in
ARC code. I wasn't able to create a test case for ARC that exhibits this
problem, though.
Reviewers: rjmccall, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D30590
llvm-svn: 297084
2nd attempt: the first was in r296231, but it had a use-after-lifetime
bug.
Clang has logic to lower certain conditional expressions directly into llvm
select instructions. However, it does not emit the correct profile counter
increment as it does this: it emits an unconditional increment of the counter
for the 'then branch', even if the value selected is from the 'else branch'
(this is PR32019).
That means, given the following snippet, we would report that "0" is selected
twice, and that "1" is never selected:
  int f1(int x) {
    return x ? 0 : 1;
               ^2  ^0
  }
  f1(0);
  f1(1);
Fix the problem by using the instrprof_increment_step intrinsic to do the
proper increment.
llvm-svn: 296245
Clang has logic to lower certain conditional expressions directly into
llvm select instructions. However, it does not emit the correct profile
counter increment as it does this: it emits an unconditional increment
of the counter for the 'then branch', even if the value selected is from
the 'else branch' (this is PR32019).
That means, given the following snippet, we would report that "0" is
selected twice, and that "1" is never selected:
  int f1(int x) {
    return x ? 0 : 1;
               ^2  ^0
  }
  f1(0);
  f1(1);
Fix the problem by using the instrprof_increment_step intrinsic to do
the proper increment.
llvm-svn: 296231
Teach ubsan to diagnose remainder operations which have undefined
behavior due to signed overflow (e.g INT_MIN % -1).
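A minimal sketch of the newly diagnosed case (assumed example, not from the
patch):

  #include <limits.h>

  int remainder_op(int a, int b) {
    return a % b;   /* remainder_op(INT_MIN, -1) overflows and is now reported */
  }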
Differential Revision: https://reviews.llvm.org/D29437
llvm-svn: 296214
C requires the operands of arithmetic expressions to be promoted if
their types are smaller than an int. Ubsan emits overflow checks when
this sort of type promotion occurs, even if there is no way to actually
get an overflow with the promoted type.
This patch teaches clang how to omit the superfluous overflow checks
(addressing PR20193).
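For example (an illustrative sketch, assuming -fsanitize=signed-integer-overflow):
both operands below are promoted to int, and the promoted addition can never
overflow int, so the check can be omitted:

  short sum(short a, short b) {
    return (short)(a + b);   /* 'a + b' is evaluated in int; no overflow is possible */
  }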
Testing: check-clang and check-ubsan.
Differential Revision: https://reviews.llvm.org/D29369
llvm-svn: 296213
This re-applies r293343 (reverts commit r293475) with a fix for an
assertion failure caused by a missing integer cast. I tested this patch
by using the built compiler to compile X86FastISel.cpp.o with ubsan.
Original commit message:
Ubsan does not report UB shifts in some cases where the shift exponent
needs to be truncated to match the type of the shift base. We perform a
range check on the truncated shift amount, leading to false negatives.
Fix the issue (PR27271) by performing the range check on the original
shift amount.
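A sketch of a previously missed case (illustrative, not from the patch): a
64-bit shift amount such as 1ULL << 32 truncates to 0 when narrowed to the
width of the base, so it used to slip past the range check even though the
shift is undefined:

  int shift(int x, unsigned long long amount) {
    return x << amount;   /* shift(1, 1ULL << 32) is UB and is now reported */
  }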
Differential Revision: https://reviews.llvm.org/D29234
llvm-svn: 293572
Ubsan does not report UB shifts in some cases where the shift exponent
needs to be truncated to match the type of the shift base. We perform a
range check on the truncated shift amount, leading to false negatives.
Fix the issue (PR27271) by performing the range check on the original
shift amount.
Differential Revision: https://reviews.llvm.org/D29234
llvm-svn: 293343
This reverts commit r290171. It triggers a bunch of warnings, because
the new enumerator isn't handled in all switches. We want a warning-free
build.
Replied on the commit with more details.
llvm-svn: 290173
Summary: Enabling the conversion of CLK_NULL_QUEUE to a variable of type queue_t.
Reviewers: Anastasia
Subscribers: cfe-commits, yaxunl, bader
Differential Revision: https://reviews.llvm.org/D27569
llvm-svn: 290171
This adds a way for us to version any UBSan handler by itself.
The patch overrides D21289 for a better implementation (we're able to
rev up a single handler).
After this, we can land a slight modification of D19667+D19668.
We probably don't want to keep all the versions in compiler-rt (maybe we
want to deprecate on one release and remove the old handler on the next
one?), but with this patch we will loudly fail to compile when mixing
incompatible handler calls, instead of silently compiling and then
providing bad error messages.
Reviewers: kcc, samsonov, rsmith, vsk
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D21695
llvm-svn: 289444
Add two new AST nodes to represent initialization of an array in terms of
initialization of each array element:
* ArrayInitLoopExpr is a prvalue of array type with two subexpressions:
a common expression (an OpaqueValueExpr) that represents the up-front
computation of the source of the initialization, and a subexpression
representing a per-element initializer
* ArrayInitIndexExpr is a prvalue of type size_t representing the current
position in the loop
This will be used to replace the creation of explicit index variables in lambda
capture of arrays and copy/move construction of classes with array elements,
and also C++17 structured bindings of arrays by value (which inexplicably allow
copying an array by value, unlike all of C++'s other array declarations).
No uses of these nodes are introduced by this change, however.
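For reference, a hedged sketch of the constructs whose element-wise copies
these nodes are intended to model (C++17, names made up):

  void demo() {
    int arr[3] = {1, 2, 3};
    auto by_copy = [arr] { return arr[0]; };  // lambda capture copies each element
    auto [a, b, c] = arr;                     // structured binding of an array by value
    (void)by_copy; (void)a; (void)b; (void)c;
  }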
llvm-svn: 289413
In the amdgcn target, null pointers in the global, constant, and generic address spaces take the value 0, but null pointers in the private and local address spaces take the value -1. Currently LLVM assumes all null pointers take the value 0, which results in incorrectly translated IR. To work around this issue, instead of emitting null pointers in the local and private address spaces, a null pointer in the generic address space is emitted and cast to the local and private address spaces.
Tentative definitions of global variables with a non-zero initializer will have weak linkage instead of common linkage, since common linkage requires a zero initializer and does not have an explicit section to hold the non-zero value.
Virtual member functions getNullPointer and performAddrSpaceCast are added to TargetCodeGenInfo; by default they return ConstantPointerNull and emit an addrspacecast instruction, respectively. A virtual member function getNullPointerValue is added to TargetInfo which by default returns 0. Each target can override these virtual functions to get a target-specific null pointer and null pointer value for a specific address space, and to perform target-specific translations for addrspacecast.
Wrapper functions getNullPointer (added to CodeGenModule) and getTargetNullPointerValue (added to ASTContext) facilitate getting the target-specific null pointers and their values.
This change has no effect on targets other than amdgcn. Other targets can provide support for non-zero null pointers in a similar way.
This change only provides support for non-zero null pointers for C and OpenCL. Support for other languages will be added incrementally later.
Differential Revision: https://reviews.llvm.org/D26196
llvm-svn: 289252
__builtin_astype is used to cast OpenCL opaque types to other types; as such, it needs to be able to handle casting from and to pointer types correctly.
Currently it cannot handle 1) casting between pointers of different address spaces and 2) casting between pointer and non-pointer types.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D25123
llvm-svn: 283114
On certain GPUs in the AMDGCN target, pointers in the private address space are 32-bit while pointers in other address spaces are 64-bit. The size_t type should be defined as 64-bit for these GPUs so that it can hold pointers in all address spaces. Also fixed issues in pointer arithmetic codegen by using a pointer-specific intptr type.
Differential Revision: https://reviews.llvm.org/D23361
llvm-svn: 279121
Use BB.getNextNode(), which returns nullptr on end(), instead of
&*BB.getIterator(), which is UB on end().
CodeGenFunction::createBasicBlock expects nullptr in this case already.
llvm-svn: 278898
Let the driver pass the option to the frontend. Do not set precision metadata for division instructions when this option is set. Set the function attribute "correctly-rounded-divide-sqrt-fp-math" based on this option.
Differential Revision: https://reviews.llvm.org/D22940
llvm-svn: 278155
Currently Clang uses int32 to represent sampler_t, which has been a source of issues for some backends, because in some backends sampler_t cannot be represented by int32. They have to depend on kernel argument metadata and use IPA to find the sampler arguments and global variables and transform them to a target-specific sampler type.
This patch uses the opaque pointer type opencl.sampler_t* for sampler_t. For each use of a file-scope sampler variable, it generates a call to __translate_sampler_initializer. For each initialization of a function-scope sampler variable, it likewise generates a call to __translate_sampler_initializer.
Each builtin library can implement its own __translate_sampler_initializer(). Since the real sampler type tends to be architecture dependent, allowing it to be initialized by a library function simplifies backend design. A typical implementation of __translate_sampler_initializer could be a table lookup of real sampler literal values. Since its argument is always a literal, the returned pointer is known at compile time and is easily optimized, finally becoming literal values directly put into image read instructions.
This patch is partially based on Alexey Sotkin's work in Khronos Clang (3d4eec6162).
Differential Revision: https://reviews.llvm.org/D21567
llvm-svn: 277024
__builtin_astype does not generate correct LLVM IR for vec3 types. This patch inserts bitcasts to/from vec4 when necessary in addition to generating a vector shuffle. Sema and codegen tests are added.
Differential Revision: http://reviews.llvm.org/D20133
llvm-svn: 272153
I couldn't find any documentation that this form existed either. Nor is there documentation for one of the remaining two forms, but there is a testcase that uses it.
llvm-svn: 269879
This patch corresponds to reviews:
http://reviews.llvm.org/D15120
http://reviews.llvm.org/D19125
It adds support for the __float128 keyword, literals, and a target feature to
enable it. Based on the latter of the two aforementioned reviews, this feature
is enabled on Linux on i386/X86 as well as SystemZ.
This is also the second attempt at committing this feature. The first attempt
did not enable it on the required platforms, which caused failures when compiling
type_traits with -std=gnu++11.
If you see failures with compiling this header on your platform after this
commit, it is likely that your platform needs to have this feature enabled.
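An illustrative sketch (assuming a target where the feature is enabled):

  __float128 scale(__float128 x) {
    return x * 2.0Q;   /* the 'Q' suffix denotes a __float128 literal */
  }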
llvm-svn: 268898
Since this patch provided support for the __float128 type but disabled it
on all platforms by default, some platforms can't compile type_traits with
-std=gnu++11 since there is a specialization with __float128.
This reverts the patch until D19125 is approved (i.e. we know which platforms
need this support enabled).
llvm-svn: 266460
This patch corresponds to review:
http://reviews.llvm.org/D15120
It adds support for the __float128 keyword, literals and a target feature to
enable it. This support is disabled by default on all targets and any target
that has support for this type is free to add it.
Based on feedback that I've received from target maintainers, this appears to
be the right thing for most targets. I have not heard from the maintainers of
X86 which I believe supports this type. I will subsequently investigate the
impact of enabling this on X86.
llvm-svn: 266186
In codegen different address spaces may be mapped to the same address
space for a target, e.g. in x86/x86-64 all address spaces are mapped
to 0. Therefore AddressSpaceConversion should be translated by
CreatePointerBitCastOrAddrSpaceCast instead of CreateAddrSpaceCast.
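A hedged sketch (illustrative, not Clang's actual code) of the substitution;
CreatePointerBitCastOrAddrSpaceCast emits a plain bitcast when the two pointer
types already share an address space and an addrspacecast otherwise:

  #include "llvm/IR/IRBuilder.h"

  // Illustrative helper: convert Src to DstTy, letting IRBuilder choose between
  // bitcast and addrspacecast based on the (target-mapped) address spaces.
  llvm::Value *convertPointer(llvm::IRBuilder<> &Builder, llvm::Value *Src,
                              llvm::PointerType *DstTy) {
    return Builder.CreatePointerBitCastOrAddrSpaceCast(Src, DstTy, "ascast");
  }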
Differential Revision: http://reviews.llvm.org/D18713
llvm-svn: 266107
Revert the two changes to thread CodeGenOptions into the TargetInfo allocation
and to fix the layering violation by moving CodeGenOptions into Basic.
Code Generation is arguably not particularly "basic". This addresses Richard's
post-commit review comments. This change purely does the mechanical revert and
will be followed up with an alternate approach to thread the desired information
into TargetInfo.
llvm-svn: 265806
This is a mechanical move of CodeGenOptions from libFrontend to libBasic. This
fixes the layering violation introduced earlier by threading CodeGenOptions into
TargetInfo. It should also fix the modules based self-hosting builds. NFC.
llvm-svn: 265702
Summary: See LLVM change D18775 for details, this change depends on it.
Reviewers: jyknight, reames
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18776
llvm-svn: 265569
Fixes PR11517 for SPARC.
On most targets, clang lowers va_arg itself, eschewing the use of the
llvm vaarg instruction. This is necessary (at least for now) as the type
argument to the vaarg instruction cannot represent all the ABI
information that is needed to support complex calling conventions.
However, on targets with a simpler varargs ABI, the LLVM instruction
can work just fine, and clang can simply lower to it. Unfortunately,
even on such targets, vaarg with a struct argument would fail, because
the default lowering to vaarg was naive: it didn't take into account the
ABI attribute computed by classifyArgumentType. In particular, for the
DefaultABIInfo, structs are supposed to be passed indirectly and so
llvm's vaarg instruction should be emitted with a pointer argument.
Now, vaarg instruction emission is able to use computed ABIArgInfo for
the provided argument type, which allows the default ABI support to work
for structs too.
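An illustrative example of the previously failing case (assumed, not from the
patch): va_arg with a struct argument on a target using DefaultABIInfo, where
the struct is passed indirectly:

  #include <stdarg.h>

  struct Pair { long a, b; };

  long sum_pair(int count, ...) {
    va_list ap;
    va_start(ap, count);
    struct Pair p = va_arg(ap, struct Pair);  /* needs the ABI-aware lowering */
    va_end(ap);
    return p.a + p.b;
  }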
I haven't touched the EmitVAArg implementation for PPC32_SVR4 or XCore,
although I believe both are now redundant, and could be switched over to
use the default implementation as well.
Differential Revision: http://reviews.llvm.org/D16154
llvm-svn: 261717
Emit calls to objc_unsafeClaimAutoreleasedReturnValue when
reclaiming a call result in order to ignore it or assign it
to an __unsafe_unretained variable. This avoids adding
an unwanted retain/release pair when the return value is
not actually returned autoreleased (e.g. when it is returned
from a nonatomic getter or a typical collection accessor).
This runtime function is only available on the latest Apple
OS releases; the backwards-compatibility story is that you
don't get the optimization unless your deployment target is
recent enough. Sorry.
rdar://20530049
llvm-svn: 258962
In {CG,}ExprConstant.cpp, we weren't treating vector splats properly.
This patch makes us treat splats more properly.
Additionally, this patch adds a new cast kind which allows a bool->int
cast to result in -1 or 0, instead of 1 or 0 (for true and false,
respectively), so we can sanely model OpenCL bool->int casts in the AST.
Differential Revision: http://reviews.llvm.org/D14877
llvm-svn: 257559